GlusterFS 3.5.3beta1 has been released for testing

The first beta for GlusterFS 3.5.3 is now available for download.

Packages for different distributions will land on the download server over the next few days. When packages become available, the package maintainers will send a notification to the gluster-users mailing list.

This beta release makes it possible for bug reporters and testers to check whether issues have indeed been fixed. All community members are invited to test and comment on this release.

If a bug from the list below has not been sufficiently fixed, please open the bug report, leave a comment with details of your testing, and change the status of the bug to ASSIGNED.

If you have successfully verified a fix, please change the status of the bug to VERIFIED.

The Release Notes for 3.5.0, 3.5.1 and 3.5.2 list all the new features added and bugs fixed in the GlusterFS 3.5 stable series.

Bugs Fixed:

  • 1081016: glusterd needs xfsprogs and e2fsprogs packages
  • 1129527: DHT :- data loss - file is missing on renaming same file from multiple client at same time
  • 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
  • 1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
  • 1133949: Minor typo in afr logging
  • 1136221: The memories are exhausted quickly when handle the message which has multi fragments in a single record
  • 1136835: crash on fsync
  • 1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
  • 1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
  • 1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
  • 1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
  • 1140338: rebalance is not resulting in the hash layout changes being available to nfs client
  • 1140348: Renaming file while rebalance is in progress causes data loss
  • 1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
  • 1140556: Core: client crash while doing rename operations on the mount
  • 1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
  • 1141733: data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
  • 1142052: Very high memory usage during rebalance
  • 1142614: files with open fd's getting into split-brain when bricks goes offline and comes back online
  • 1144315: core: all brick processes crash when quota is enabled
  • 1145000: Spec %post server does not wait for the old glusterd to exit
  • 1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. Allow insecure ports on the volume:
       gluster volume set <volname> server.allow-insecure on
    2. Restart the volume:
       gluster volume stop <volname>
       gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. Restart glusterd:
       service glusterd restart
    More details are documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind off
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang. The workaround is NOT to call glfs_fini in error paths encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).
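
The glfs_fini workaround above can be sketched in C against the libgfapi API. This is a minimal sketch, not a definitive client: the volume name "testvol" and host "server.example.com" are placeholders, and the header path may vary between distributions; link with -lgfapi.

```c
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    /* "testvol" is a placeholder volume name */
    glfs_t *fs = glfs_new("testvol");
    if (!fs)
        return EXIT_FAILURE;

    /* "server.example.com" and port 24007 are placeholders */
    if (glfs_set_volfile_server(fs, "tcp", "server.example.com", 24007) != 0) {
        /* glfs_init has not succeeded yet: do NOT call glfs_fini here
         * (Bug 1134050) -- doing so can hang the client. */
        return EXIT_FAILURE;
    }

    if (glfs_init(fs) != 0) {
        /* Same workaround: skip glfs_fini on init failure. */
        fprintf(stderr, "glfs_init failed\n");
        return EXIT_FAILURE;
    }

    /* ... use the volume via other glfs_* calls ... */

    glfs_fini(fs); /* safe: glfs_init succeeded */
    return EXIT_SUCCESS;
}
```

Once the fix tracked in Bug 1134050 lands, calling glfs_fini in the error paths becomes safe again; until then, clients should follow this pattern.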