Yesterday morning in Barcelona, the day after the Gluster Summit, GlusterFS 3.7.0 was released. Close to 600 bug reports (a mix of feature requests, enhancements, and problems) have been addressed in a little over 1220 patches since July last year.
This release marks the end of maintenance for GlusterFS 3.4.x; users of the 3.4 stable version should consider upgrading.
The upcoming Fedora 23 (currently Rawhide) will be the first release that comes with 3.7.x by default; older Fedora releases will be kept on their current stable Gluster versions. Packages for different distributions and versions will become available shortly on download.gluster.org. Once packages are available, announcements will be made on the Gluster Users list.
The release notes below have been put together with help from the many engineers who worked on the features and their testing. You can also find the release notes in the repository along with the rest of the code.
Major Changes and Features
Documentation about major changes and features is included in the doc/features/ directory of the GlusterFS repository.
Bitrot Detection
Bitrot detection is a technique used to identify an “insidious” type of disk error where data is silently corrupted with no indication from the disk to the storage software layer that an error has occurred. When bitrot detection is enabled on a volume, gluster performs signing of all files/objects in the volume and scrubs data periodically for signature verification. All anomalies observed will be noted in log files. For more information, refer here.
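As an illustration, signing and scrubbing are controlled per volume through the bitrot CLI; the scrub frequency shown here is one of the supported presets:
# gluster volume bitrot <volname> enable
# gluster volume bitrot <volname> scrub-frequency weekly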
Multi-threaded epoll for performance improvements
Gluster 3.7 introduces multiple threads to dequeue and process more requests from epoll queues. This improves performance by processing more I/O requests. Workloads that involve read/write operations on a lot of small files can benefit from this enhancement. For more information, refer here.
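The number of epoll threads can be tuned on both sides of the connection; for example, to use four threads each for the bricks and the clients of a volume:
# gluster volume set <volname> server.event-threads 4
# gluster volume set <volname> client.event-threads 4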
Volume Tiering [Experimental]
Policy-based tiering for the placement of files. This feature will serve as a foundational piece for building support for data classification. For more information, refer here.
Volume Tiering is marked as an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.
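As a sketch of the workflow, a set of faster bricks can be attached to an existing volume as its hot tier (the brick paths are placeholders); the tier can later be removed again with the matching detach-tier command:
# gluster volume attach-tier <volname> <hot-brick1> <hot-brick2>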
Trashcan
This feature will enable administrators to temporarily store deleted files from Gluster volumes for a specified time period. For more information, refer here.
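For example, the trash feature is toggled through volume options (option names follow the trash translator documentation; the size limit shown is only an example), with deleted files landing in a .trashcan directory at the top of the volume:
# gluster volume set <volname> features.trash on
# gluster volume set <volname> features.trash-max-filesize 100MB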
Efficient Object Count and Inode Quota Support
This improvement enables an easy mechanism to retrieve the number of objects per directory or volume. The count of objects/files within a directory hierarchy is stored as an extended attribute of the directory. The extended attribute can be queried to retrieve the count. For more information, refer here.
This feature has been utilized to add support for inode quotas.
For more details about inode quotas, refer here.
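For example, with quotas enabled on a volume, a limit on the object count below a directory can be set and listed (the path and count are placeholders):
# gluster volume quota <volname> enable
# gluster volume quota <volname> limit-objects /dir 10000
# gluster volume quota <volname> list-objects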
Pro-active Self healing for Erasure Coding
Gluster 3.7 adds pro-active self healing support for erasure coded volumes.
Exports and Netgroups Authentication for NFS
This feature adds Linux-style exports & netgroups authentication to the native NFS server. This enables administrators to restrict access to specific clients & netgroups for volume/sub-directory NFSv3 exports. For more information, refer here.
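As a hedged sketch (the option name and file locations here are assumptions based on the feature documentation and may differ on your installation), the exports and netgroups files live under /var/lib/glusterd/nfs/ and the checks are switched on per volume:
# gluster volume set <volname> nfs.exports-auth-enable on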
GlusterFind
GlusterFind is a new tool that provides a mechanism to monitor data events within a volume. Detection of events like modified files is made easier without having to traverse the entire volume. For more information, refer here.
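For example, a typical session creates a checkpoint, lists the files changed since the previous checkpoint, and then moves the checkpoint forward:
# glusterfind create <sessname> <volname>
# glusterfind pre <sessname> <volname> <outfile>
# glusterfind post <sessname> <volname>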
Rebalance Performance Improvements
Rebalance and remove-brick operations in Gluster get a performance boost from faster identification of files needing movement and a multi-threaded mechanism to move all such files. For more information, refer here.
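The user-facing commands are unchanged; the improvements apply transparently when a rebalance is started:
# gluster volume rebalance <volname> start
# gluster volume rebalance <volname> status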
NFSv4 and pNFS support
Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and pNFS. This support is enabled via NFS Ganesha. Infrastructure changes done in Gluster 3.7 to support this feature include:
- Addition of upcall infrastructure for cache invalidation.
- Support for lease locks and delegations.
- Support for enabling Ganesha through Gluster CLI.
- Corosync and Pacemaker based implementation providing resource monitoring and failover to accomplish NFS HA.
For more information, see:
- NFS Ganesha Integration
- Upcall Infrastructure
- Gluster CLI for NFS Ganesha
- High Availability for NFS Ganesha
- pNFS support for Gluster
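Of the pieces above, the CLI support can be sketched as a cluster-wide switch followed by a per-volume export toggle (this assumes the Ganesha HA prerequisites from the feature pages are already in place):
# gluster nfs-ganesha enable
# gluster volume set <volname> ganesha.enable on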
Snapshot Scheduling
With this enhancement, administrators can schedule volume snapshots. For more information, see here.
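Scheduling is driven by the snap_scheduler.py helper; as a sketch, initialising it and adding a job with a cron-style schedule (the job name and schedule are placeholders) looks like this:
# snap_scheduler.py init
# snap_scheduler.py enable
# snap_scheduler.py add "hourly-snaps" "0 * * * *" <volname>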
Snapshot Cloning
Volume snapshots can now be cloned to create a new writeable volume. For more information, see here.
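For example, a clone is created from an existing snapshot and then started like any other volume:
# gluster snapshot clone <clonename> <snapname>
# gluster volume start <clonename>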
Sharding [Experimental]
Sharding addresses the problem of fragmentation of space within a volume. This feature adds support for files that are larger than the size of an individual brick. Sharding works by chunking files into blobs of a configurable size. For more information, see here.
Sharding is an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.
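As a sketch, sharding is enabled per volume and the shard size is tunable (the block size shown is only an example):
# gluster volume set <volname> features.shard on
# gluster volume set <volname> features.shard-block-size 64MB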
RCU in glusterd
Thread synchronization and critical section access have been improved by introducing userspace RCU in glusterd.
Arbiter Volumes
Arbiter volumes are 3-way replicated volumes where the 3rd brick of the replica is automatically configured as an arbiter. The 3rd brick contains only metadata, which provides network partition tolerance and prevents split-brains from happening. For more information, see here.
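For example, an arbiter volume is created with the new replica/arbiter syntax, where the third brick holds only metadata:
# gluster volume create <volname> replica 3 arbiter 1 <brick1> <brick2> <brick3>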
Better split-brain resolution
Split-brain resolution can now also be driven by users without administrative intervention. For more information, see the 'Resolution of split-brain from the mount point' section here.
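As a sketch of the mount-point workflow (the virtual xattr names follow the section referenced above; the file and brick names are placeholders), a user inspects the split-brain state, picks the good copy, and triggers the heal:
# getfattr -n replica.split-brain-status <path-to-file>
# setfattr -n replica.split-brain-choice -v <brickname> <path-to-file>
# setfattr -n replica.split-brain-heal-finalize -v <brickname> <path-to-file>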
Geo-replication improvements
There have been several improvements in geo-replication for stability and performance. For more details, see here.
Minor Improvements
- Message ID based logging has been added for several translators.
- Quorum support for reads.
- Snapshot names contain timestamps by default. Subsequent access to the snapshots should be done by the name listed in gluster snapshot list.
- Support for gluster volume get <volname> added (see the example after this list).
- libgfapi has added handle based functions to get/set POSIX ACLs based on common libacl structures.
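For the gluster volume get support mentioned above, all options of a volume, including defaults, can now be inspected in one command:
# gluster volume get <volname> all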
Known Issues
- Enabling Bitrot on volumes with more than 2 bricks on a node is known to cause problems.
- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
1. # gluster volume set <volname> server.allow-insecure on
2. Edit /etc/glusterfs/glusterd.vol to contain this line:
option rpc-auth-allow-insecure on
Post 1, restarting the volume would be necessary:
# gluster volume stop <volname>
# gluster volume start <volname>
Post 2, restarting glusterd would be necessary:
# service glusterd restart
or
# systemctl restart glusterd