We are working hard to release the next major version of Gluster. The roadmap for Gluster 3.8 is getting more complete every day, though there is still some work to do. A reminder for all users of the 3.5 stable series: when GlusterFS 3.8 is released, the 3.5 version will become unmaintained. We do our best to maintain three versions of Gluster; with the 3.8 release those will be 3.8, 3.7 and 3.6. Users still running version 3.5 are highly encouraged to start planning their upgrade. If no critical problems are reported against the 3.5 version and no patches get sent, 3.5.9 might well be the last 3.5 release.
Release Notes for GlusterFS 3.5.9
This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 and 3.5.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:
- 1313968: Request for XML output ignored when stdin is not a tty
- 1315559: SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing (see the example below)
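For application developers, the fix for 1315559 means that an unsupported SEEK_HOLE/SEEK_DATA request is now reported cleanly as EINVAL instead of a misleading result. Here is a minimal sketch of how a client can probe for sparse-file support; the mount path is purely illustrative:

    /* Probe for SEEK_HOLE support; the path below is a hypothetical
     * FUSE mount point of a Gluster volume. */
    #define _GNU_SOURCE            /* SEEK_HOLE/SEEK_DATA on Linux */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/glustervol/sparse-file", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        off_t hole = lseek(fd, 0, SEEK_HOLE);
        if (hole == (off_t)-1 && errno == EINVAL)
            /* With this fix: protocol support is missing. */
            fprintf(stderr, "SEEK_HOLE not supported on this mount\n");
        else if (hole == (off_t)-1)
            perror("lseek");
        else
            printf("first hole at offset %lld\n", (long long)hole);

        close(fd);
        return 0;
    }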
Known Issues:
- The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:

  1. Enable insecure ports on the volume:

     gluster volume set <volname> server.allow-insecure on

  2. Restart the volume:

     gluster volume stop <volname>
     gluster volume start <volname>

  3. Edit /etc/glusterfs/glusterd.vol to contain this line:

     option rpc-auth-allow-insecure on

  4. Restart glusterd:

     service glusterd restart
- For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:

  gluster volume set <volname> performance.open-behind disable
- libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init (see the sketch after this list). This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
- If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).
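For libgfapi users, here is a minimal sketch of the glfs_fini workaround described above; the volume name, server and port are placeholders, and error handling is reduced to the essentials:

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("testvol");   /* hypothetical volume name */
        if (!fs)
            return 1;

        if (glfs_set_volfile_server(fs, "tcp", "server1", 24007) != 0)
            /* glfs_init has not succeeded yet, so do NOT call
             * glfs_fini here (Bug 1134050); just bail out. */
            return 1;

        if (glfs_init(fs) != 0)
            /* Same here: skipping glfs_fini avoids the reported hang. */
            return 1;

        /* ... use the virtual mount via other glfs_* calls ... */

        glfs_fini(fs);   /* safe only after a successful glfs_init */
        return 0;
    }

Build against libgfapi with something like: gcc example.c -lgfapi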