Extended attributes (xattr) support is needed from the underlying filesystem
Supports access via NFS, Samba, or a native FUSE mount
Native apps can use libgfapi directly (this is also how NFS-Ganesha and QEMU integrate)
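A minimal C sketch of a libgfapi client (volume name, server host, and file path here are made up for illustration):

    /* build with: gcc app.c -lgfapi */
    #include <glusterfs/api/glfs.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("myvol");                       /* hypothetical volume name */
        glfs_set_volfile_server(fs, "tcp", "server1", 24007); /* any storage node */
        if (glfs_init(fs) != 0) { perror("glfs_init"); return 1; }

        /* I/O goes straight to the bricks, no FUSE or NFS hop */
        glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_WRONLY, 0644);
        glfs_write(fd, "hello\n", 6, 0);
        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }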
NFSv3 is supported natively; NFSv4 is also supported via NFS-Ganesha
An NFS daemon runs on each storage node, and the connected node acts as the pivot point for fetching the data (a load balancer can be used in front of NFS)
Samba has a VFS plugin that uses libgfapi (see the config sketch below)
Samba runs on each storage node
CTDB used for load balancing and clustering
Samba also pivots data access through the connected storage node
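A rough smb.conf share sketch for the glusterfs VFS module (share, volume, and server names here are assumptions, not from the talk):

    [gluster-share]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = myvol              ; hypothetical volume name
        glusterfs:volfile_server = localhost  ; Samba runs on the storage node itself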
Volume type determines how data is placed (distributed, replicated, etc.)
The Davies-Meyer hash algorithm decides where data is placed (hence no need for a metadata DB; see the toy sketch below)
Link files are used if a new brick has been added and data has not yet been rebalanced
First access is then a two-hop lookup, but the client caches which brick actually holds the data
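A toy C sketch of why no metadata DB is needed: placement is computed from the file name. (The real implementation uses the Davies-Meyer hash mapped onto per-directory ranges assigned to bricks; the hash function and brick names below are stand-ins.)

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t toy_hash(const char *name)   /* FNV-1a as a stand-in */
    {
        uint32_t h = 2166136261u;
        for (; *name; name++) { h ^= (uint8_t)*name; h *= 16777619u; }
        return h;
    }

    int main(void)
    {
        const char *bricks[] = { "server1:/brick", "server2:/brick",
                                 "server3:/brick" };
        const char *file = "photo.jpg";
        /* any client computes the owning brick from the name alone */
        printf("%s -> %s\n", file, bricks[toy_hash(file) % 3]);
        return 0;
    }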
A replicated volume is like RF2 (replication factor 2)
Disperse volumes use erasure coding
Example: 11 bricks with redundancy 3 (8 data + 3 redundancy)
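With 8+3, each stripe is cut into 8 data fragments plus 3 redundancy fragments across the 11 bricks: any 3 bricks may fail without data loss, and the raw-space overhead is 11/8 ≈ 1.38x, versus 2x for two-way replication.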
Distributed-replicated volumes are the usual choice, as they provide both scaling and redundancy
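For example, assuming 6 bricks and replica 2 (server names made up), the layout would be:

    replica pair 1: server1:/brick <-> server2:/brick
    replica pair 2: server3:/brick <-> server4:/brick
    replica pair 3: server5:/brick <-> server6:/brick

Files are hashed across the three pairs; each file is mirrored within its pair.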
pNFS support with NFS-Ganesha
Geo-replication (asynchronous replication) does not support active/active usage at this time
A log of changes is kept, and changes are synced in the background
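Setup is roughly this pair of gluster CLI calls (volume and host names are made up):

    gluster volume geo-replication myvol slavehost::slavevol create push-pem
    gluster volume geo-replication myvol slavehost::slavevol start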
GlusterFS snapshots need LVM2 thin-provisioned storage
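The brick preparation would look roughly like this (device and VG names are assumptions):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 500G -T vg_bricks/thinpool            # thin pool
    lvcreate -V 200G -T vg_bricks/thinpool -n brick1  # thin LV for the brick
    mkfs.xfs /dev/vg_bricks/brick1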
Data tiering places data according to its access rate (hot vs. cold)
User-serviceable snapshots use a hidden .snaps directory (it doesn't show in ls, but you can cd into it)
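e.g. cd /mnt/myvol/somedir/.snaps/snap1/ shows that directory as of the snapshot (mount point and snapshot name here are hypothetical).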
Backups seem to need more work; for example, there is no data available for delta (incremental) changes.
Friday 9-10:30 evaluating distributed fs performance (S4 GlusterFS and Ceph)