Nixpanic's Blog

Another Gluster 3.8 Long-Term-Maintenance update with the 3.8.8 release

The Gluster team has been busy over the end-of-year holidays, and this latest update to the 3.8 Long-Term-Maintenance release fixes quite a number of bugs. Packages have been built for many different distributions and are available from the download server. The release notes for 3.8.8 are included below for ease of reference. All users on the 3.8 version are recommended to update to this release.

Release notes for Gluster 3.8.8

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6 and 3.8.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 38 patches have been merged, addressing 35 bugs:
  • #1375849: [RFE] enable sharding with virt profile - /var/lib/glusterd/groups/virt
  • #1378384: log level set in glfs_set_logging() does not work
  • #1378547: Asynchronous Unsplit-brain still causes Input/Output Error on system calls
  • #1389781: build: python on Debian-based dists use .../lib/python2.7/dist-packages instead of .../site-packages
  • #1394635: errors appear in brick and nfs logs and getting stale files on NFS clients
  • #1395510: Seeing error messages [snapview-client.c:283:gf_svc_lookup_cbk] and [dht-helper.c:1666:dht_inode_ctx_time_update] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x5d75c)
  • #1399423: GlusterFS client crashes during remove-brick operation
  • #1399432: A hard link is lost during rebalance+lookup
  • #1399468: Wrong value in Last Synced column during Hybrid Crawl
  • #1399915: [SAMBA-CIFS] : IO hungs in cifs mount while graph switch on & off
  • #1401029: OOM kill of nfs-ganesha on one node while fs-sanity test suite is executed.
  • #1401534: fuse mount point not accessible
  • #1402697: glusterfsd crashed while taking snapshot using scheduler
  • #1402728: Worker restarts on log-rsync-performance config update
  • #1403109: Crash of glusterd when using long username with geo-replication
  • #1404105: Incorrect incrementation of volinfo refcnt during volume start
  • #1404583: Upcall: Possible use after free when log level set to TRACE
  • #1405004: [Perf] : pcs cluster resources went into stopped state during Multithreaded perf tests on RHGS layered over RHEL 6
  • #1405130: `gluster volume heal split-brain' does not heal if data/metadata/entry self-heal options are turned off
  • #1405450: tests/bugs/snapshot/bug-1316437.t test is causing spurious failure
  • #1405577: [GANESHA] failed to create directory of hostname of new node in var/lib/nfs/ganesha/ in already existing cluster nodes
  • #1405886: Fix potential leaks in INODELK cbk in protocol/client
  • #1405890: Fix spurious failure in bug-1402841.t-mt-dir-scan-race.t
  • #1405951: NFS-Ganesha:Volume reset for any option causes reset of ganesha enable option and bring down the ganesha services
  • #1406740: Fix spurious failure in tests/bugs/replicate/bug-1402730.t
  • #1408414: Remove-brick rebalance failed while rm -rf is in progress
  • #1408772: [Arbiter] After Killing a brick writes drastically slow down
  • #1408786: with granular-entry-self-heal enabled i see that there is a gfid mismatch and vm goes to paused state after migrating to another host
  • #1410073: Fix failure of split-brain-favorite-child-policy.t in CentOS7
  • #1410369: Dict_t leak in dht_migration_complete_check_task and dht_rebalance_inprogress_task
  • #1410699: [geo-rep]: Config commands fail when the status is 'Created'
  • #1410708: glusterd/geo-rep: geo-rep config command leaks fd
  • #1410764: Remove-brick rebalance failed while rm -rf is in progress
  • #1411011: atime becomes zero when truncating file via ganesha (or gluster-NFS)
  • #1411613: Fix the place where graph switch event is logged

GlusterFS 3.8.5 is ready for consumption

Another month, another GlusterFS 3.8 update! We're committed to fixing reported bugs in the 3.8 Long-Term-Maintenance version, with monthly releases. Here is glusterfs-3.8.5 for increased stability. Packages for different distributions should be landing shortly.

Release notes for Gluster 3.8.5

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2, 3.8.3 and 3.8.4 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 27 patches have been merged, addressing 26 bugs:
  • #1373723: glusterd experiencing repeated connect/disconnect messages when shd is down
  • #1374135: Rebalance is not considering the brick sizes while fixing the layout
  • #1374280: rpc/xdr: generated files are filtered with a sed extended regex
  • #1374573: gluster fails to propagate permissions on the root of a gluster export when adding bricks
  • #1374580: Geo-rep worker Faulty with OSError: [Errno 21] Is a directory
  • #1374596: [geo-rep]: AttributeError: 'Popen' object has no attribute 'elines'
  • #1374610: geo-replication *changes.log does not respect the log-level configured
  • #1374627: Worker crashes with EINVAL errors
  • #1374632: [geo-replication]: geo-rep Status is not showing bricks from one of the nodes
  • #1374640: glusterfs: create a directory with 0464 mode return EIO error
  • #1375043: bug-963541.t spurious failure
  • #1375096: dht: Update stbuf from servers having layout
  • #1375098: Value of `replica.split-brain-status' attribute of a directory in metadata split-brain in a dist-rep volume reads that it is not in split-brain
  • #1375542: [geo-rep]: defunct tar process while using tar+ssh sync
  • #1375565: Detach tier commit is allowed when detach tier start goes into failed state
  • #1375959: Files not being opened with o_direct flag during random read operation (Glusterfs 3.8.2)
  • #1375990: Enable gfapi test cases in Gluster upstream regression
  • #1376385: /var/tmp/rpm-tmp.KPCugR: line 2: /bin/systemctl: No such file or directory
  • #1376390: Spurious regression in tests/basic/gfapi/bug1291259.t
  • #1377193: Poor smallfile read performance on Arbiter volume compared to Replica 3 volume
  • #1377290: The GlusterFS Callback RPC-calls always use RPC/XID 42
  • #1379216: rpc_clnt will sometimes not reconnect when using encryption
  • #1379284: warning messages seen in glusterd logs for each 'gluster volume status' command
  • #1379708: gfapi: Fix fd ref leaks
  • #1383694: GlusterFS fails to build on old Linux distros with linux/oom.h missing
  • #1383882: client ID should logged when SSL connection fails

GlusterFS 3.8.4 is available, Gluster users are advised to update

Even though the last 3.8 release was just two weeks ago, we're sticking to the release schedule and have 3.8.4 ready for all our current and future users. As with all updates, we advise users of previous versions to upgrade to the latest and greatest. Several bugs have been fixed, and upgrading is one way to prevent hitting known problems in the future.

Release notes for Gluster 3.8.4

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2 and 3.8.3 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 23 patches have been merged, addressing 22 bugs:
  • #1332424: geo-rep: address potential leak of memory
  • #1357760: Geo-rep silently ignores config parser errors
  • #1366496: 1 mkdir generates tons of log messages from dht xlator
  • #1366746: EINVAL errors while aggregating the directory size by quotad
  • #1368841: Applications not calling glfs_h_poll_upcall() have upcall events cached for no use
  • #1368918: tests/bugs/cli/bug-1320388.t: Infrequent failures
  • #1368927: Error: quota context not set inode (gfid:nnn) [Invalid argument]
  • #1369042: thread CPU saturation limiting throughput on write workloads
  • #1369187: fix bug in protocol/client lookup callback
  • #1369328: [RFE] Add a count of snapshots associated with a volume to the output of the vol info command
  • #1369372: gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
  • #1369517: rotated FUSE mount log is using to populate the information after log rotate.
  • #1369748: Memory leak with a replica 3 arbiter 1 configuration
  • #1370172: protocol/server: readlink rsp xdr failed while readlink got an error
  • #1370390: Locks xlators is leaking fdctx in pl_release()
  • #1371194: segment fault while join thread reaper_thr in fini()
  • #1371650: [Open SSL] : Unable to mount an SSL enabled volume via SMB v3/Ganesha v4
  • #1371912: gluster system:: uuid get hangs
  • #1372728: Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments ]" messages in logs.
  • #1373530: Minor improvements and cleanup for the build system
  • #1374290: "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between
  • #1374565: [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume

The out-of-order GlusterFS 3.8.3 release addresses a usability regression

On occasion the Gluster project deems an out-of-order release the best approach to address a problem that was introduced with the last update. The 3.8.3 version is such a release, and we advise all users to upgrade to it, if possible skipping the 3.8.2 release. See the included release notes for more details. We're sorry for any inconvenience caused.

Release notes for Gluster 3.8.3

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1 and 3.8.2 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Out of Order release to address a severe usability regression

Due to a major regression that was not caught by any of the testing that had been performed, this release is made outside of the normal schedule.
The main reason to release 3.8.3 earlier than planned is to fix bug 1366813:
On restarting GlusterD or rebooting a GlusterFS server, only the bricks of the first volume get started. The bricks of the remaining volumes are not started. This is a regression caused by a change in GlusterFS-3.8.2.
This regression breaks automatic start of volumes on rebooting servers, and leaves the volumes inoperable. GlusterFS volumes could be left in an inoperable state after upgrading to 3.8.2, as upgrading involves restarting GlusterD.
Users can forcefully start the remaining volumes by running the gluster volume start <name> force command, as shown in the sketch below.
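
A minimal sketch of this workaround, assuming the gluster CLI is available in PATH and the commands are run on one of the server nodes (gluster volume list prints one volume name per line):

    # Force-start every volume; "force" also brings up bricks that are
    # down on a volume that is already marked as started.
    for vol in $(gluster volume list); do
        gluster volume start "$vol" force
    done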

Bugs addressed

A total of 24 patches have been merged, addressing 21 bugs:
  • #1357767: Wrong XML output for Volume Options
  • #1362540: glfs_fini() crashes with SIGSEGV
  • #1364382: RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option
  • #1365734: Mem leak in meta_default_readv in meta xlators
  • #1365742: inode leak in brick process
  • #1365756: [SSL] : gluster v set help does not show ssl options
  • #1365821: IO ERROR when multiple graph switches
  • #1365864: gfapi: use const qualifier for glfs_*timens()
  • #1365879: [libgfchangelog]: If changelogs are not available for the requested time range, no proper error message
  • #1366281: glfs_truncate missing
  • #1366440: [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one.
  • #1366482: SAMBA-DHT : Crash seen while rename operations in cifs mount and windows access of share mount
  • #1366489: "heal info --xml" not showing the brick name of offline bricks.
  • #1366813: Second gluster volume is offline after daemon restart or server reboot
  • #1367272: [HC]: After bringing down and up of the bricks VM's are getting paused
  • #1367297: Error and warning messages related to xlator/features/snapview-client.so adding up to the client log on performing IO operations
  • #1367363: Log EEXIST errors at DEBUG level
  • #1368053: [geo-rep] Stopped geo-rep session gets started automatically once all the master nodes are upgraded
  • #1368423: core: use <sys/sysmacros.h> for makedev(3), major(3), minor(3)
  • #1368738: gfapi-trunc test shouldn't be .t

The GlusterFS 3.8.2 bugfix release is available

Pretty much according to the release schedule, GlusterFS 3.8.2 has been released this week. Packages are available in the standard repositories, and are moving from testing status to regular updates in the various distributions.

Release notes for Gluster 3.8.2

This is a bugfix release. The Release Notes for 3.8.0 and 3.8.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 54 patches have been merged, addressing 50 bugs:
  • #1339928: Misleading error message on rebalance start when one of the glusterd instance is down
  • #1346133: tiering : Multiple brick processes crashed on tiered volume while taking snapshots
  • #1351878: client ID should logged when SSL connection fails
  • #1352771: [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
  • #1352926: "gluster volume status client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
  • #1353814: Bricks are starting when server quorum not met.
  • #1354250: Gluster fuse client crashed generating core dump
  • #1354395: rpc-transport: compiler warning format string
  • #1354405: process glusterd set TCP_USER_TIMEOUT failed
  • #1354429: [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
  • #1354499: service file is executable
  • #1355609: [granular entry sh] - Clean up (stale) directory indices in the event of an rm -rf and also in the normal flow while a brick is down
  • #1355610: Fix timing issue in tests/bugs/glusterd/bug-963541.t
  • #1355639: [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
  • #1356439: Upgrade from 3.7.8 to 3.8.1 doesn't regenerate the volfiles
  • #1357257: observing " Too many levels of symbolic links" after adding bricks and then issuing a replace brick
  • #1357773: [georep]: If a georep session is recreated the existing files which are deleted from slave doesn't get sync again from master
  • #1357834: Gluster/NFS does not accept dashes in hostnames in exports/netgroups files
  • #1357975: [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
  • #1358262: Trash translator fails to create 'internal_op' directory under already existing trash directory
  • #1358591: Fix spurious failure of tests/bugs/glusterd/bug-1111041.t
  • #1359020: [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored.
  • #1359364: changelog/rpc: Memory leak- rpc_clnt_t object is never freed
  • #1359625: remove hardcoding in get_aux function
  • #1359654: Polling failure errors getting when volume is started&stopped with SSL enabled setup.
  • #1360122: Tiering related core observed with "uuid_is_null () message".
  • #1360138: [Stress/Scale] : I/O errors out from gNFS mount points during high load on an erasure coded volume,Logs flooded with Error messages.
  • #1360174: IO error seen with Rolling or non-disruptive upgrade of an distribute-disperse(EC) volume from 3.7.5 to 3.7.9
  • #1360556: afr coverity fixes
  • #1360573: Fix spurious failures in split-brain-favorite-child-policy.t
  • #1360574: multiple failures of tests/bugs/disperse/bug-1236065.t
  • #1360575: Fix spurious failures in ec.t
  • #1360576: [Disperse volume]: IO hang seen on mount with file ops
  • #1360579: tests: ./tests/bitrot/br-stub.t fails intermittently
  • #1360985: [SNAPSHOT]: The PID for snapd is displayed even after snapd process is killed.
  • #1361449: Direct io to sharded files fails when on zfs backend
  • #1361483: posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop
  • #1361665: Memory leak observed with upcall polling
  • #1362025: Add output option --xml to man page of gluster
  • #1362065: tests: ./tests/bitrot/bug-1244613.t fails intermittently
  • #1362069: [GSS] Rebalance crashed
  • #1362198: [tiering]: Files of size greater than that of high watermark level should not be promoted
  • #1363598: File not found errors during rpmbuild: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py{c,o}
  • #1364326: Spurious failure in tests/bugs/glusterd/bug-1089668.t
  • #1364329: Glusterd crashes upon receiving SIGUSR1
  • #1364365: Bricks doesn't come online after reboot [ Brick Full ]
  • #1364497: posix: honour fsync flags in posix_do_zerofill
  • #1365265: Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
  • #1365742: inode leak in brick process
  • #1365743: GlusterFS - Memory Leak - High Memory Utilization

First stable update for 3.8 is available, GlusterFS 3.8.1 fixes several bugs

The initial release of Gluster 3.8 was the start of a new Long-Term-Maintenance version with monthly updates. These updates include bugfixes and stability improvements only, making it a version that can safely be installed in production environments. The Long-Term-Maintenance versions are planned to receive updates for a year. With minor releases happening every three months, the upcoming 3.9 version will be a Short-Term-Maintenance release, receiving updates until the next version is released three months later.
GlusterFS 3.8.1 was released a week ago, and in the meantime packages for many distributions have been made available. We recommend that all 3.8.0 users upgrade to 3.8.1. Environments that run on 3.6.x should consider an upgrade path in the coming months; 3.6 will be End-Of-Life when 3.9 is released.

Release notes for Gluster 3.8.1

This is a bugfix release. The Release Notes for 3.8.0 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 35 patches have been sent, addressing 32 bugs:
  • #1345883: [geo-rep]: Worker died with [Errno 2] No such file or directory
  • #1346134: quota : rectify quota-deem-statfs default value in gluster v set help command
  • #1346158: Possible crash due to a timer cancellation race
  • #1346750: Unsafe access to inode->fd_list
  • #1347207: Old documentation link in log during Geo-rep MISCONFIGURATION
  • #1347355: glusterd: SuSE build system error for incorrect strcat, strncat usage
  • #1347489: IO ERROR when multiple graph switches
  • #1347509: Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
  • #1347524: NFS+attach tier:IOs hang while attach tier is issued
  • #1347529: rm -rf to a dir gives directory not empty(ENOTEMPTY) error
  • #1347553: O_DIRECT support for sharding
  • #1347590: Ganesha+Tiering: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
  • #1348055: cli core dumped while providing/not wrong values during arbiter replica volume
  • #1348060: Worker dies with [Errno 5] Input/output error upon creation of entries at slave
  • #1348086: [geo-rep]: Worker crashed with "KeyError: "
  • #1349274: [geo-rep]: If the data is copied from .snaps directory to the master, it doesn't get sync to slave [First Copy]
  • #1349711: [Granular entry sh] - Implement renaming of indices in index translator
  • #1349879: AFR winds a few reads of a file in metadata split-brain.
  • #1350326: Protocol client not mounting volumes running on older versions.
  • #1350785: Add relative path validation for gluster copy file utility
  • #1350787: gfapi: in case of handle based APIs, close glfd after successful create
  • #1350789: Buffer overflow when attempting to create filesystem using libgfapi as driver on OpenStack
  • #1351025: Implement API to get page aligned iobufs in iobuf.c
  • #1351151: ganesha.enable remains on in volume info file even after we disable nfs-ganesha on the cluster.
  • #1351154: nfs-ganesha disable doesn't delete nfs-ganesha folder from /var/run/gluster/shared_storage
  • #1351711: build: remove absolute paths from glusterfs spec file
  • #1352281: Issues reported by Coverity static analysis tool
  • #1352393: [FEAT] DHT - rebalance - rebalance status o/p should be different for 'fix-layout' option, it should not show 'Rebalanced-files' , 'Size', 'Scanned' etc as it is not migrating any files.
  • #1352632: qemu libgfapi clients hang when doing I/O
  • #1352817: [scale]: Bricks not started after node reboot.
  • #1352880: gluster volume info --xml returns 0 for nonexistent volume
  • #1353426: glusterd: glusterd provides stale port information when a volume is recreated with same brick path

GlusterFS 3.5.9 is available, will it be the last 3.5 release?

There has been a delay in announcing the most recent 3.5 release; I'm sorry about that! Packages for most distributions are available by now, either from the standard distribution repositories (NetBSD) or from download.gluster.org.

We are working hard to release the next major version of Gluster. The roadmap for Gluster 3.8 is getting more complete every day, though there is still some work to do. A reminder for all users of the 3.5 stable series: when GlusterFS 3.8 is released, the 3.5 version will become unmaintained. We do our best to maintain three versions of Gluster; with the 3.8 release those will be 3.8, 3.7 and 3.6. Users still running version 3.5 are highly encouraged to start planning their upgrade process. If there are no critical problems reported against the 3.5 version, and no patches get sent, 3.5.9 might well be the last 3.5 release.

Release Notes for GlusterFS 3.5.9

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 and 3.5.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1313968: Request for XML output ignored when stdin is not a tty
  • 1315559: SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888); see the sketch after this list.
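
For the quota issue above, a minimal workaround sketch, run as root on a server node; the volume name "myvol" is a placeholder:

    # Bug 1117888: enabling quota can fail when this directory is missing,
    # so create it before running the quota command.
    mkdir -p /var/run/gluster
    gluster volume quota myvol enable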

GlusterFS 3.5.8 is out, two bugs fixed in this stable release update

Last month GlusterFS 3.5.8 was tagged for release in our git repository. The tarball got placed on our main distribution server, and some packages got built for different distributions. Because releases and packages are mostly done by volunteers in their free time, it sometimes takes a little longer to make all the packages for the different distributions available. Please be patient until the release has been made completely (at that point we'll update the 3.5/LATEST symlink). If you are interested in helping out with the packaging for a certain distribution or project, send your introduction and offer of assistance to our packaging mailing list.

Release Notes for GlusterFS 3.5.8

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6 and 3.5.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1117888: Problem when enabling quota : Could not start quota auxiliary mount
  • 1288195: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled.
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).

GlusterFS 3.5.7 has been released

Around the 10th of each month the release schedule allows for a 3.5 stable update. This one got delayed a few days due to the unfriendly weather in The Netherlands, which made me take some holidays in a sunnier place in Europe.

This release fixes two bugs: one is only a minor improvement for distributions using systemd, the other fixes a potential client-side segfault when the server.manage-gids option is used. Packages for different distributions are available on the main download server; distributions that still provide glusterfs-3.5 packages should get updates out shortly too.
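
To see whether the fixed code path applies to your setup, you can check if the option has been set on a volume. A minimal sketch, with "myvol" as a placeholder volume name:

    # server.manage-gids only shows up under "Options Reconfigured:" in the
    # volume info output when an administrator has set it; it is off by default.
    gluster volume info myvol | grep manage-gids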


Hurry up, only a few days left to do the 2015 Gluster Community Survey

The Gluster Community provides packages for Fedora, CentOS, Debian, Ubuntu, NetBSD and other distributions. All users are important to us, and we really like to hear how Gluster is (not?) working out for you, or what improvements are most wanted. It is easy to pass this information along (anonymously) through this year's survey (it's a Google form).

If you would like to comment on the survey itself, please get in touch with Amye.