
GlusterFS: read-only file system after fuzz testing #4164

@SecTechTool

Description of problem:
I created a Gluster cluster (three nodes) and mounted the volume from one client node. All operations worked normally. However, after a period of fuzz testing, write operations on the client node fail with the error "Read-only file system".

The exact commands to reproduce the issue:

  1. First, create a three-node Gluster cluster (gluster1, gluster2, and gluster3)
    gluster volume create gv0 replica 3 192.168.102.186:/data/brick1/gv0 192.168.103.61:/data/brick1/gv0 192.168.102.34:/data/brick1/gv0

  2. Then, on the client node, mount the GlusterFS volume
    mount -t glusterfs 192.168.102.186:/gv0 /mnt/gluster-test

  3. Finally, run fuzz testing (syzkaller) on the client node; the fuzzer continually generates programs consisting of sets of file-related syscalls. The key test input is:

    open_by_handle_at(0xffffffffffffffff, &(0x7f0000000000)=@reiserfs_4={0x10, 0x4, {0x1f, 0x0, 0x8001, 0x7f}}, 0x200000)
    ioctl$EVIOCGKEY(0xffffffffffffffff, 0x80404518, &(0x7f0000000040)=""/74)
    ioctl$F2FS_IOC_MOVE_RANGE(0xffffffffffffffff, 0xc020f509, &(0x7f00000000c0)={<r0=>0xffffffffffffffff, 0x5, 0x2, 0x1})
    ioctl$EVIOCGMASK(r0, 0x80104592, &(0x7f0000000200)={0x0, 0xc9, &(0x7f0000000100)="98a509036e05c19141f41b7cdfba0a08e64c97b683ad821be08bbd3c91d6af2efc6ccb5411fbee5dfdda38d2352beeed379737cd06e70f04418585f2c36c7b41885df0668dc966e904f3884ef07c330eadc5f04eba2ccba71b0933d72496417500a74f6bbd429f59abab79f26e90058dd3bdd745513a042dd262d47630f7d1c37b16809e818e0e30a783a62f2209d1ba694bf8cc9e5731749946bcbe5a2228d10ce7859b8906b0cd8159c00eea8181849302922cefa7863e8bb4a16a8f8dbc8e02aac4acd52d645bec"})
    ioctl$EVIOCRMFF(r0, 0x40044581, &(0x7f0000000240)=0x9)
    r1 = openat$ubi_ctrl(0xffffffffffffff9c, &(0x7f0000000280), 0x480302, 0x0)
    write$evdev(r1, &(0x7f00000002c0)=[{{}, 0x4, 0x7, 0x3}], 0x18)
    r2 = openat$selinux_mls(0xffffffffffffff9c, &(0x7f0000000300), 0x0, 0x0)
    newfstatat(0xffffffffffffff9c, &(0x7f0000000340)='./file0\x00', &(0x7f0000000380)={0x0, 0x0, 0x0, 0x0, <r3=>0x0, <r4=>0x0}, 0x800)
    ioctl$AUTOFS_DEV_IOCTL_REQUESTER(r0, 0xc018937b, &(0x7f0000000400)={{0x1, 0x1, 0x18, <r5=>r2, {<r6=>r3, <r7=>0xee01}}, './file0\x00'})
    chdir(&(0x7f0000000440)='./file0\x00')
    r8 = openat$selinux_commit_pending_bools(0xffffffffffffff9c, &(0x7f0000000480), 0x1, 0x0)
    r9 = dup(r2)
    r10 = ioctl$UDMABUF_CREATE(r8, 0x40187542, &(0x7f00000004c0)={r9, 0x0, 0x4000, 0xfffffffff0000000})
    fcntl$setpipe(r8, 0x407, 0x6)
    ioctl$EVIOCGSND(r2, 0x8040451a, &(0x7f0000000500)=""/181)
    r11 = dup(r9)
    ioctl$EVIOCSABS20(r11, 0x401845e0, &(0x7f00000005c0)={0x9, 0x400, 0x3, 0x7, 0x1, 0x3})
    ioctl$INOTIFY_IOC_SETNEXTWD(r5, 0x40044900, 0x2)
    ioctl$INCFS_IOC_FILL_BLOCKS(r10, 0x80106720, &(0x7f0000001740)={0x2, &(0x7f0000001700)=[{0x1, 0xec, &(0x7f0000000600)="e341d4bb353a306dd29d49c57acb934055fe6b4f0fb3131e1dcfcb47aa4df5da9ff51b3c7da1c5df1c009ce8230f27daeb2a4f86c1b3fddc4ceee1d1e289cf606a99eca50971f87e7ae4bf0571c5da492c5a1ad7ab808290456bb05eb356f49ad22d5570265109003d0b0ed3165dcbee253c693bb2a960d6e97e55792fd58121c947bf5312ba05155764e2334bd40709223fb58363f64ab1a301afc5fccb759a5619bf02c1ba2329836382087dd722bcc5d728f41c1a0feb62355e743710f5717d648e4d29f6eba04353665aeb849d4410b1751bbb3b3cc58ea4868331b9bcf9459e6004b595dc07b8c26d56", 0x0, 0x1}, {0x8, 0x1000, &(0x7f0000000700)="83a3ae1d7e03a05b52b50d75f292c81ecb8f21940e61e3e92326f37cd0c77d48c3b8e85c98ce28bc439560ee174bef32a0beda44c69d1a0c9cdafe30d410c48342174519f99454754853600a2e50e5349e903cabf832778071409377a94cfb3e7f145b465bcdc05dac049a2809cfd12051c843e3b128d5d7191dfb7e6e5b2a3cc1dccdd7b88f15652cc60c75e9885985eed9303d04f07809923ba0f67fb2af1f1ff255bb528116776530dc7482cf6bc4ba0cb4935ee5ba6718f691602d8843fc1a13cb43b45a0b7f1c1d72ddf5c600f1cac65c70ea6473cfb09cec9364972c578edfda5292ca11e3dac586b896821b5af9f1d715066340b81fd5c8409b31967b27adeb8ac643b81ff39783b0773907cfb4bd0a239ac0b914ce90d435e8cc3077119224034817b14a1364fe24237c12ddcd55c240c3619b104a388d156e001c6afdb90366c2af3a0e5e5544c340640a1942e3623a0c8ff6bbc8917a4ab743e89d319c5d288e3a0632b6ac71004a4696c1b94ba79c9b5a390f6d6018b945d89a18ca9fbe1d83a7441155bbe2da9b6ef8125fd104a7c3e2df9c27dfc3e9c1d86a2b6778a991a79ddd1d1303341e706472b490e4cc9fe7ad9bee23e6b36dbaee80c97810004e668b3b2a8e69ff919d3897171377deba69f1c34631c57e48c6741b7a209f477e393224c6e26ba201eac97a60a363e7e474d27f9547d10f385b9731e157c467726b1f224279d9f8f2c3ffb49ff9fecfa7509e6bac3b0ea76f13ad987c06c4b37ba0fe4f063dc57b75793c338a63952f6813fa8ccd064ea1c3b2c803960e8711bb8661c19fc5df1595cd316d72d88c40c333c7f5c7683dd8147154e78dbdf100e13eadea4971d6066cdd7f61b32d52306b8a72242a59a92942c8c88f5a79a2a8406ac4167ca4ce4e3005350a3bd7a5e384bb9db4e0eb53ebb7eb60f23a2aa4ebfd3b175827ef87e9f13aa70d08bba423a2c41e1e4f5da67bdd6b6ca2f2c8ec2a286ca70b48f2bd0ebc3c8b0ed292bbf4b2773b63544889e86decc87559f4e04e964a7e56ef0c3e3b67d11ff90a5451dc4c6fb13bb601836122b499e89952f91e39658b4ae5c624469208c75cebed0b58c65826873b390a39267b06fbceb2c44284025b1d87caadeb9d35139e624263349de013ce00a32a58e00ca694c16a3de8c23225dd67928c073745ce22fc87e69c181fc619e1748e1fb1c663ac9520c3160607ece6e85fecee504d78249c4bc8149ba1f7c1b9da450cee390df77345515579823bbd378e57cd4cbedaa55caf3ce36ba105d560b532e3bd3021a008f4db0ba534f5ab123630172b1bf579341e8ddff78d32d321b869d1a13422453ea419cdb2dccb51d5fbcfa6c624012d5b3e7088c682a5b678a80a775494d7a4d97150d8775abc4cb8102b6484dbe4dad52af8871179291a7dced027fc403b4b6182fde384ea4cce4eb935a2d186274e797e1e9805e701796dcb799100808991d4fcee5abe51935015a5389cd75e121e0e45907fab75c5a14f4d48f6b111518ebf028ae26dcf05dab06b452b43a98706382451215c95b2ba2539a2c563cf77af7ac1ef580738420993ba6d80e2b07a02a593f06a66a4729922f6075b01fde6f42b60fe43b599f1fdff7e55ba862b6c985a0974ba69730f09c3862f1a8015c170b4e3c1f869cd63a59b40cac7b0ee9d947894359f3f6d29bd144f0f448880028337251f6e1da4777c45f7bfd6af2196d57291d484572500f5aa7e24116f8eb45ba7fee6b1f28b99225b46f73a7c899f127b7e55d594140e9d380cfbe24fe4ce1f057a15f6c329d1510904b8c161a44d440705a69fa8a3099b2cf534130d3e281a1765a94605adf2496c2ada35ce8b8f5b238ff0a0d1fab5e8999f8da6899fd46b12ef9426c92accb5ae8e88b0c7d10a5be211dfd6511f0bbbdc34cd67e0dc91215931438b87e7df9780497f34ce2a2b639d8619b5b02dbf7d429ef493fbe9ab49b56ad5746dc2c1fa63d0e2a491159b49378e959b24c968f1a129d735c72cbf2e0297f137ec8e51769b508d67604f37bdf89
b7fd9172fe6441a29607b8d149b32b43bab3ba62b54121d003e5f9100a49fceff3abb10156531568a04ae0fbc18091ac4e74d0793e23783014c71cf4934100ae4e15252f860f9d78d29b46b9d29a7075689fa0989c020d3682327fe455e023e93590d83f6fdf20a73d659f684e98be820d2587c73f5de96fb9cfa380e6c845887ed70c9aa646bf277a22ec9a62f68cd746867fb8d466df8c40412826d7aa9960a77a69ff24449847fb685f0031d6bdde0f564c09eed5a7412d69eed082b0a9f77ca487288614c63f9da7fb557bcff5f8be0d807447ed5227b3accf20e07ba8f45e981ea32631147acc693b5e2a35a4bc171036b29a4195de6551adf4f33f75353b94ecc9f4976b03e15f037d8feb27fe7f387115a387aff87aa432b081741c6a4a611e018c57609245c3f5d77f5cfffcc43f055021364b0ac458ad97c5472138f7e50c1169ca5beb31ccc8691bae9f4cbdc4a1fb69dfb3aa254daa30931a23b3df5c9673247d470e3adaba701c895285e0176b2fb7242eac949857b8d595c1dbda42a7e7d689f5cbb0d0b89ec6913823f39e0fb62cdbfb2e9b21eb768b8abe72249fc7c092b7f79f3a753f6508d8023ac013ad071444e4e3c4017293d47247184809d07c9391a4fa925e3dd1622426f49bed0f565d2cd773d2fa90a45b3df8f502cb2d3bc2c1874b021c36cc7791aa9236cb2abb6e7129fd05d8e65c03d856c2b98c32276e5804d2a4e35f770708c2996df4036ab36c9bf2f72e41b623b9c38989da4b5d1d77b18f41aa62cd751ccab9c1913d3a1132846ebc9ed54f8c02c5e956868b9bdf3b48e88e20db799aa570a04c76bdf128d38f54b1b7d1bbe4fb38474a5f47a05fab7259b1de2ee3e3ce6dde6789bf5bc4565d5ae433eac4ca096946219118c182dc9cdee77584293967420040bacbd60b5eca1a92ddd54befc74f763918cd85c5ca54ddc9e1dbce110c002711d52863d2afee89d1aa5985453cc137db98430cd644ec22496379c13c9304443ddfc10fca8d016eba35df1499b7bca5db9877b5ad0c444196bf84f3aae8154f7b7273f1a2924dc79c3a431e770102421ca0ca5ef006b9155554e92822c1b65618f596110d1599223c93c3d227b1004f4df51c997f799c39aae283cd7629b271871a4e834a6faf653c00db43492e011941d044637b25bd50d7f7fcff7530ceed098d4925dd6519ae3a2c15cea90feabc352930a28883d760b3c52561a70ce9a78fcb64eb380f93fd0de49ea057f028b84f858b4ba324e8a31eda0b6a11ca063d8536e139f7ddd76137951d59605aa049e55ccc3590573a0be75b36e0fa8f140d4adb04276589647a2b64126ee834bfdf83a8485cd119ae29549c2a661ea7b2a15f46c80a3104e170baa11c4f56fe6ede96d5b79799c048c23f72d495b5d1dba033db5ea481982aef738236108e695a39ef334316a99a31884ccb9f4bf6794bc255b9b7de93e4ea8d26d7e8bdaa69d9fe5fe97239b2b2003b807a6536065eb59c3c4d5b79b656fd7f243b96defb7f1eb53ca4460144c1bab18172dd42e4e3c047e47c8bb2e4a522b329ce43f0c7dc0388793bb4b77de237ce9189a32cd77ba948a04883f41f2476de8ef58b492b6a967381db288262e99ed5a770b213cd22b9ea597ccfac8e3793a531e429977ce7a35cc400d2b07530b65592030ea80ee10aecb0934de5092fbbebb3c9172bdc22d66e5c827b63c2d7e326484a58062d5613276943040cb32e64825d4f50e46f77fde43268084ad6ddd3d8f64eca55b6a90ebf9952bbb2e653eadb9004ceb46d055814c8f161146ed665f4c14f40471c9e806c0f079715b6981403329650e488b7b1276aa8c69e23800733f0a0992550aebe07ce20f94cb6b410ca616f9f0eedb3048134b3f2f8860aa8842eb93f41b700ce8a37b0a5e893929946ebc4814453e735dce8aedb8bc99dca781802e676b28e321c7e0f5cc1cd07047452656bdaedd7cae73dccd2037dbb6e21c112b30162ddec1d8dc05e62e2125ecae672b40b31ce3c0bcaadd04ad23b987b9d2ed39ed84bd4bcea1d26b0259eeee081a7c1a7a7e99d3015e145c678e0806fb0d92ace38b5da5629c16db36366c4e10ae6fab006560e52bcf49ea7319bc1836c68b972be65b5731b13a1f018be3f0b6e03983ab056e2f40c55885c1a105fc236d6f5b933810292efcd18c6add3408158d3be288bdc6e04145764783f54c1789f5d0c6f8c3b69006e22b307930500c1d51cfff023cf8d44ed85c8831d0f76504d460544d0b963386c83587480b0ea1c0664c9bdab014aadbd8ac75f82d51ad6128b1087cdafb99b96a08d7f60a42af14ff0c080d08c7c9205f0490f73dd3294eec79b72299373f7348936986a6b2a825d57e1be80719328c1d3c1a334acd3d03db3b5a8ef020620784650c4fff4feb239d12fab7adbc44de89ec494925d5d707a24d4a92351223d78a9d554eaffbf1f653c224638cf395c214243c09cf23c6c51bcfab7d2b04067df8
c911ceed531036785f0266a51c28973fdc42720cf03d11e6cab971bbb4393975a4a917bc60c138b5df2d1ec09ef5f156b30ebae80f998dba0e9db8eaf03d74728641796a3e71eb7f0417be78a2f9b78666d89d576edf700af62a699228f16a7a948aee6f7eb16650c0a724a38ce2c39dda7b75af6a5ab5b56bbe64c2b7d59b554907df7a6d84d7c737fa94f7ec714445d3ea0ff77e82b91d926213f9e944ab2ff3cffc384a7a3f47dc679352e24a8bb2771bca302ef594b8ca998bf4f0bdad977cc6535cdf3cfa4cf08199024defecf53c21a52ba7d06c52057d229ace8425d355091aba6d1bfe5d60fb270ee48600a4aad430059117022184d952414ecc71c8fb535d2ebcd22ef15b83c761ecc0b05bfc773487740de224b24c8723f1eac96aaadf9c61faf9588e0145370a8eb5984dd20b371fc25ccecb62c2b97b6600910731ea4427361f47fc650e0318d01b0cbe5df95b3682743776162ffd1acfd3aaf2a66763814824c0aa8da130d8c269b7c3ff552cf46ca21ed5ac54df34ccbbdf952a4459a9b87fe8f198f37b2ba3f91a119cccfd2b91f77aa78d2fbd4a87f2546f764c1b5f039cd3891c949f0bcd381d07b824a0f39e4e4f9f9c2a732a866812c03ac027d8d2ef5be154a44aa09ecb955a0c74f3a50a4164ac556c5b949c0659f4cc332ce8dbbbd0795a1386ea63eee329634c46eee3bc07771583955f4e57cf930e307be2adb7ed803c6ce06c74dae9aee22e9e2a6cc0ad6a4cea73e8c3f44415674b77837397abf51f69f94367cf73b6b4f4047139019ca5d8947a4a2e663f36a321ba3310b8b091f0c54fac112915b6443bba68eefc3b17a7e23a071e4d13d6cf2be7af47b9ec97797cd51a4267138734197454be590b8a64fe5ce412fbde5b6d16128db5a558fd02ca157e2abd1293a3e42c6843453af68e50de6b68186ca15316363b9d7d777ebe4ca85f433e78ba4714a3893f74a6c9b38627c7fc38b2b192a7941fa5049e0f46c1cd000c4a06ec3caf70913388a4c69fb86702753a20de61ecef6129e2089ba6e9634706e04c2cdfdb2aaa6a95d862dc6e0c6f2f0f1ed7f95f8590692204e4dfa8d577a9da0a47afd861a9d74270b71ac860d12921716674368b7d66670c9debe5edc0d8afe981370d771e54e0e9b0724204c7c88ca3c9bfceae728c375ffca1ffa35fd12f0bea0bd3264b4a12da346a42079d7882"}]})
    r12 = openat$selinux_context(0xffffffffffffff9c, &(0x7f0000001780), 0x2, 0x0)
    r13 = pidfd_getfd(r0, r0, 0x0)
    ioctl$BLKRESETZONE(r13, 0x40101283, &(0x7f00000017c0)={0x3, 0x8000})
    ioctl$AUTOFS_DEV_IOCTL_PROTOVER(r0, 0xc0189372, &(0x7f0000001800)={{0x1, 0x1, 0x18, <r14=>r2, {0xffffff81}}, './file0\x00'})
    mount$cgroup2(0x0, &(0x7f0000001840)='./file0\x00', &(0x7f0000001880), 0x1000000, &(0x7f00000018c0)={[{@memory_recursiveprot}, {@memory_localevents}, {@memory_localevents}, {@memory_localevents}, {}, {}], [{@smackfsroot={'smackfsroot', 0x3d, '/selinux/mls\x00'}}, {@dont_measure}]})
    close_range(r5, 0xffffffffffffffff, 0x2)
    ioctl$F2FS_IOC_GET_PIN_FILE(0xffffffffffffffff, 0x8004f50e, &(0x7f0000001980))
    fsetxattr(0xffffffffffffffff, &(0x7f00000019c0)=@random={'os2.', 'smackfsroot'}, &(0x7f0000001a00)='&-(\x00', 0x4, 0x3)
    ioctl$AUTOFS_DEV_IOCTL_FAIL(r11, 0xc0189377, &(0x7f0000001a40)={{0x1, 0x1, 0x18, <r15=>r9, {0xffffffc1, 0x101}}, './file0/file0\x00'})
    r16 = openat$vcsa(0xffffffffffffff9c, &(0x7f0000001a80), 0x100, 0x0)
    fanotify_mark(r15, 0x2, 0x10, r16, &(0x7f0000001ac0)='./file0\x00')
    r17 = openat$cgroup(r11, &(0x7f0000001b00)='syz1\x00', 0x200002, 0x0)
    ioctl$UI_BEGIN_FF_ERASE(r5, 0xc00c55ca, &(0x7f0000001b40)={0x2, 0x3, 0x6})
    fstat(r15, &(0x7f0000001b80)={0x0, 0x0, 0x0, 0x0, <r18=>0x0, <r19=>0x0})
    r20 = openat$pktcdvd(0xffffffffffffff9c, &(0x7f0000001c00), 0x101041, 0x0)
    ioctl$BTRFS_IOC_QUOTA_RESCAN_STATUS(r15, 0x8040942d, &(0x7f0000001c40))
    ioctl$FS_IOC_RESVSP(r1, 0x40305828, &(0x7f0000001c80)={0x0, 0x0, 0x8, 0x7})
    r21 = openat$vfio(0xffffffffffffff9c, &(0x7f0000001cc0), 0x2200, 0x0)
    sendfile(r21, r12, &(0x7f0000001d00), 0x0)
    ioctl$SG_BLKTRACETEARDOWN(r5, 0x1276, 0x0)
    fstatfs(r9, &(0x7f0000001d40)=""/131)
    flock(r14, 0x2)
    ioctl$BTRFS_IOC_DEV_REPLACE(r12, 0xca289435, &(0x7f0000001e00)={0x2, 0xe5, @status={[0x101, 0x6, 0x5, 0x9, 0xfff, 0x4]}, [0x8, 0x8, 0x80000000, 0xc76b, 0x100000001, 0x6, 0x682, 0x63, 0x8, 0x9, 0x9, 0x1ff, 0x2, 0x4, 0x8, 0x9, 0x1, 0x0, 0x1ff, 0x7, 0x9, 0x41, 0x9, 0xb56, 0x9, 0xbef, 0x0, 0x8001, 0x9, 0x1000, 0x0, 0x9, 0x7, 0x8, 0x9, 0xe3c0, 0xfffffffffffffff7, 0xde16, 0x8, 0x0, 0x4, 0x4, 0x3, 0x1, 0xffffffffffffffff, 0x2, 0x100000001, 0x2c, 0x1, 0x3, 0x0, 0x10000, 0x3, 0x38, 0x7, 0x7, 0x9, 0x8, 0x9, 0xffffffff80000000, 0x723d, 0x6]})
    r22 = openat$dma_heap(0xffffffffffffff9c, &(0x7f0000002840), 0x319000, 0x0)
    ioctl$BTRFS_IOC_SNAP_CREATE(r21, 0x50009401, &(0x7f0000002880)={{r22}, "74df77d487e0efb18ec56706f049a2b0c9c03a374a114901b9971968a40ee65fe4ff52a58fb899f995615bb9437f1527beff6021db8a6c745ab94940578712f30d63984464941e3861592a75e07276cc715beb806eca510324430b6ce32278c1e3729e89dd4ab2ce816e3bb7075f31ced081dd4e422db3b1df537738b6497965763628ec69a60674043cfc1b9299e5437bceeb76539101b39ab74e269234bc1c2249cb9b6f59347e42a90d9e264eaab40ee5f6606854dcf2cb3eddbf09ce42c24eeb0345543b7f4aec823a76d31cb36a36937de0dea5d279cedc4b092c386bec10c4143c094a65aeefcb9f244e2cd6374af3cb4a3fa6e3f78179ef50aef40246b043a7d9249bf4a2bb6aa537281002b4e8f2a880376432cf58756e3ffa68b1876c08b9fdb50d79283b7735f7d26fd8ebd8ab660df0fc451a63aac0df4a228d760f708696d4abf3c8eb2f3df9a7f31d38a841dc2662ba43270422419d675da204e3e44a98606fe62aa10e0cb177ad48acff27f810a94d18e28585fd57d6db430cd5ac996f8d4fd1c0451994f98d16284950461b86acaf50bfdc6101e8da8a07099ce49b6a19db3de38d315cc9d27de1473470a507f536bf96d5e854a748ce2bc4735f1f47cf129eb7a77d42bbf6bbd48e2e7cb35e6590eee2651b41b183d00ae83f4c67ff7a3a9af555ebef48754efaf114c1b02dd7ff917370e9fc6b726fc71fc13d519c508892db5ddb4b27921bfe33bf72651b8ffea267cdc975a99f41a17c71ecbbd4af90da419bde8c49ad490a2a88a1836cdb16dadc96c1f27e3355bbfb94da08982d6f540df41c81975e1630e31616190347015403707adabd4263b3921820156306526bd1ec6c2831e60a0e6cc51d1c95aaaa709fc1ba76b1bf5d0d2cab52cc87c94347ae44e13f40d6ec02010c35ce75fafac42c798f59ec819f0936ade98b801bcbe7b21807d6e15b51577864876409356bde6226999e40ec4b3ec5bad3fa8351a8054104109f5c06799dfb4651ad627482c47e0a1fb6f8056f214ac556691d046c05f31f6136d44a691486b62ac2bb907cee34ada411827d326b1c3bdf5d20860b7dd31d7d8823b91d23d605b89ebaf12c7a5d193b9b54f5d4dd4c1fc2b8b9570f488fe9073f49c653abd810eecc997a8109a91de7e8b98118351aef9cde8cb740c5caa736e541aa098c06ef6025483f1167d686e36c2f8c605de127bef03d9f3ca3106d16b102ce44179c84c0a04dc83921c4465bac239a98917b747f1ad1a223d2ab16f7b149219848e5d71260bc840e5daa5804c08a636463127293cca320edc2af5cb68801df9fd8b277890fed3a5ef3ecd45aa43e3ffe553a5cae20c4791b957f70334a15ee34f73eecb2e4a3d06a8b8ac7026876384a8672a8e0c03bb2454309979f19430f2db87191e1dd15cc552be2542049115e5272b06a5295db96ce5f7c66096de4ff5e799f8ce1e7535746a153bb7224d0f1e5e2973525ff6dcc8fe422db5cf69110ca630bc8d2211d4a45f1adaa2bb8c219d13beb2477ee46d415775f940d305d077b0a4adba0c9b58c49b435b3a4a258b072294e4341fd75d1d1ecc5b2d10bfe4b8a931c538cd9803b41a371d9e3c0514470c188883f954fdda37abca2d3a4596da4c31a1cf8ebf6e87c00696d0975e59edc93ddbfe0ab8d2cd91b51c6d99c1ea1934879298c9714a475f1332a3674cb659331f0d28ded163fa630e058fd8c8a484d6a1a577943cb6fe9bb563ae6e50a72fc60a390b839ade71718cf6b06423e623a4b93a87756bb357672bc99f18b9ae31963655a1c61f9e1b40744e92b02a562b5cad638ce5bca4674308515054a5a2f3e243473876439fb5c87a6645bce2b4a8fe028e8e38874f0910dee429254fb003114286232ef25895039100ee0ff032c2e57f216e6327fc2c2492503a68fad907297462938c14fc00a1cd1b23bc75c75d4492f07e2fbeea24c2e05004d2600d228d7b3df0c45308f83d08c78192c86a4f048d27755d90fd1e74c98a48a8c2572a3bfb2b6fb85a337ad47a79f524a20ef2be2833de529ea9e93afa2353209287efe746f3719c9365047865f49db3842107c2fb6b073dab54f616a773b05212623dac8df99724ae6ef4195d24dec2f06a2bbe138b5f075bdbc44c270d42c91a5a3c54dbe26741a77a8f826ce7322f36a39bbcbdaef6a0a9c57d4a0dc9e8d6a41a3b7f027855993d4e6dbf4a1b95f1a388e267c11e6572b35a386e74fa8ccd10675d0018909a81e3aba349ba48d5bd56329b9a3a6093f47bc7ab5328e3158f15d559edfc144acc2c84d223eb02f718a8f0c0a122427118b1895625dd843d70cb941576b7e540750c87e866f0a62f7de36cb3634a0860029907680317bb731db4657e0db6d49669bae244946e1a5d4e893a17ded9a5380b011ecebe2fa4c42abfa93197a26c43a5afbbb8a59617440305
4b3121ac8fb119a0578f3b86042092d7cb2d105b8f646c3b758c91d0d4adf5e8fcbe6f4e5d885683263efaa02dbc3d1cee23698a119617ea297d896884a714b3239eb3e774abd89beed992c5005e7ed58fd6bab9d68dc804a4388b4f9da2a18b28e0ed247a0c82503bea1322e889e698d34efca0cca2004124cbc565355eb0b3441c163aa005f6354b02cf1349618f6b97d8c1ffdc26b38297082f91bb933fd531320751890a076093d7ea854b679ccb9cd313e85ab8b4a87751d08c1341202564821805361dc7b40598b74b19cb6f3df68e05e96d27626b3e0af913f5bb6350875afd2f2537ce55e716e6e4a873582780639c317d314d0711b09bd9536149e20cedd488109bf29fc47ea9059af9a41037d8ac6fad541412988487f31796f90d16cfba37d0dd056949fa561c297d705d1328a99abe05d299fdfaa974a5525e4c0f9c5b9d686424b1c36467accba549aca31ae5a769014f4d90673f8752c3d2afdd42d225c114292cfabaeffeffefb513b8d57ee6bbaeaaf14445473801ab8a50c49162780fd40dec0570f2d2b80e791aef98e7f8a1f1ad70c4cd6fb645689832f7448670e67cd0890fe105e98a92475a00b0347fa7124755ce4311f61b3d0b25ae04a4224ddd7c0ee0e5d6cb08d061878274aaa89fc80163575f430020c721a1c990b8668d61a32a168123ec2984c138aa65471179e16453280af1f9dbb11c48e82195deb285f2d66d478e2016d42cddf773a5b9c895987a783b6f4f2518dd10c39b12b4cfbfed1b8aabac4366255b6e8b14868ede3a97e7adee58607967f6ba64e1f25ec2b916e168bc2a4e38d002078d974ec7d10e13e06a78fa8ef0deba84ddc4c7755872b196cccbfd43b7f59badccfdae518a3d4faeb99c9bb4acf5d8b1d21813cc6f1221d1204ef574f44ec868742e2b841e17be27598ed97789dcb1aca993805ca48a8d8c4a67827004c44e1819b20e6cfebe8174b19b30938c2c1d604afdd4be8c3972734fe1760a8001d5ee64aaa560d44e1e90daa65b1c29c993a5a398af5abe960d4c7626629c61b022b97a9b80949d40e43a323622ef8f695a027ab1b6c01b8151f83b12804d51787cb92c0c19f097773e6a383dc3a161370981601f78dbd79059ae033f86891a9446d39ec9848fdc221b2b1683bf91fec975d2b97843bbc6ac807efe94d2f0d3c18f871df1237f901fc038a1f4957daf82b21ef5522f6f53d3ade85f038e957894eee26d8a3464c97c954a2b54cf04dab102a27d87357f3e8f9699990fb8b5221d42ebda7bb2c1c231ac2fcbc9b3aad7529ad804dfcb83f842c9818bb13ba51cc37954b79f4af96de3c82df379fee533e7641b6aebd7df7127db7c3f918a3083414fd635b9d4805d272ab6c8972987b60a529a7fd722a544d23a2401cd89031a92d4734e8b91514c5c06d219d5eca7f4f9e32a0e010c77116dc9f41b381fad81c6b36ab2af216de262d7204b5d18ab7a6c9e2abdc02e57ce03762f400f08dc81c2ce8bf61ac818a3e8f3b0fd563fa7403c460064573011e17f5c21ac4a6fe926c00220480089cdad2b0fb889f09ebeb1f01a77b8b3dfe60582e62d702014ac41dc50b9829b1be7f4fc4e1613ce52b3ae55b5b28d703ef414b6881ff5c798353a261f8dc6f40613a9c6673477f170222d28d7a7dddcda758ad92304dbe607fc7ae96a286f43a7ac920c23340a76302c73d75857297f04a36c1f8a84f9e0bdb6eacf953119094d620cb0c69518db5637265bf311fe597633927f084b05ce0c1dc67e72d6c89b5b30bd02f983b2eb4502dfcc3f7e16cee845dbd9e84139ac45f17c4311fcdc1c6e0b2592e28b8f4bdbabc99f36fe58876dad9754e62701c15798ad504b02d5533102a5eca450d59b1b229615786224a56576beb12569d75fe2a16772f5b0abf4ebcd52d5b98e67df39ef5270c28ce20ed0a54eff233acb1a3de0273ea94673c2a4775d43fd2e5b43c185e35b5dc0c90b76f3885e9a51459a476d57e529448c12001b263fd1ae93b55986feb008d8f61abfccb02a4555d95af253f3bcbfe5a8f15c6a954b01c5096163b78f3cf85f7a9ea30b703ff48237990bb8c162b9b5d15de3de1ff169a5d9b8f59867ed7202c84e788be1934a6f94c18078546937dd8da12b48754eaec404b968e370614677b6841f437386848534ee9061c65920f71d61b8563993f7ebcb389189d8e05e7ce8b3fbef23011f221d197c5efd09757cb83d17bad4b21bbebe3b490d930ffacae9f80885d4d000b62e5eae25c56a88b6fdb606b1c2b5834f4fc407e7ab4a9d03b1a86c0d2b2cd9d4c6877cbba615eb853c036cb308262f6e87d3b5a85cc5695706a22a628ad3634d2d64104348b1d753eb91241adc7c0f87b7854197a99df59dabbc33e18a72518df8f39e804700e040d97fd85cf7bfd2cf566809a4541e2468a978d751fd8b7881f52e1db583dff68aea9686a7f7ce4608409a5cba0e872547361051f894a8015f6d61ee3fce876b4906c
e52224258d7c25114f3bda12c1105ba6826a621a8537aba1cb5e3ccda89b5cb5e30b15f96ba9f0b751b04391ea04a7622052475b418e1cc88ef7c496e2177763615f89ff99627fd6350a2baf02b63c63e60ba39dd0d117cca491f8c05d6b3663528c588de63b3a163518df204e8af85e60bd29bbb06cceb95b5742abd4779202fa391a01da859109223465aed14441c99b10417fe4d313fef7c6308a7c83eba47a4d04f2f2438e84886b668b123fc095107e2dc01e5e6d7aa26a90aaf5e9192977996c743f0b95d22d2f0f1199dc25fba15175c1afaade9845830e2fd54a83b4de02495eaa9d3c5c6a40100e1a0cf96618a0ef6822f9ba297f7a4af37cba159362b78287646a504c6e8ecffd7506153b69b124c87bea11d4368f76a6111ba0706d2bbf1336d5cec2dd43520b52a04447e59a0e14418f228173dec76e2c481a859f01525ef59e2aac2215769cb70e9b5feb2079bea593af98deba50be2ddb914e176e608d8a101dad5adb970c28d8d4785889c4c1d6398a231c527d722fc06014e4c35669396d1f58d1f9e54892a302d1c0ed692bf3fc5729cf1205fc059cc71344495b633ad0b6c78fd1f62f372b236b2cd89a05fe55297867e4c20f17821b20002daa85f4135aaef81c1f397b3004a0e94a4b39118c730c9de8906ecaec9d5bc7d2d3d7b897271af3ca4f65c546b5a55b06b080927c737e619fcb2d8458c50ca8c668b0609d710f43fa13294d5776d9d27b5fa89f44f98a3473ffdd01839d12c1382c22f57051926f7cd059ed9bb32ee58c04769b5527af9d3681"})
    ioctl$SG_GET_TIMEOUT(r13, 0x2202, 0x0)
    ioctl$AUTOFS_DEV_IOCTL_CLOSEMOUNT(r0, 0xc0189375, &(0x7f0000003880)={{0x1, 0x1, 0x18, <r23=>r0}, './file0/file0\x00'})
    ioctl$EVIOCSABS3F(r23, 0x401845ff, &(0x7f00000038c0)={0x10000, 0x4, 0x3, 0x432d, 0xda})
    ioctl$AUTOFS_DEV_IOCTL_OPENMOUNT(r20, 0xc0189374, &(0x7f0000003900)={{0x1, 0x1, 0x18, <r24=>r10, {0x2}}, './file0/file0\x00'})
    ioctl$TIPC_IOC_CONNECT_hwkey(r13, 0x40087280, &(0x7f0000003940))
    r25 = openat$selinux_user(0xffffffffffffff9c, &(0x7f0000003980), 0x2, 0x0)
    ioctl$F2FS_IOC_GARBAGE_COLLECT(r25, 0x4004f506, &(0x7f00000039c0))
    ioctl$AUTOFS_DEV_IOCTL_EXPIRE(r24, 0xc018937c, &(0x7f0000003a00)={{0x1, 0x1, 0x18, <r26=>r14, {0x1}}, './file0/file0\x00'})
    write$cgroup_pressure(r5, &(0x7f0000003a40)={'some', 0x20, 0x6, 0x20, 0x3}, 0x2f)
    r27 = openat$ubi_ctrl(0xffffffffffffff9c, &(0x7f0000003a80), 0x40000, 0x0)
    write$yama_ptrace_scope(r27, &(0x7f0000003ac0)='3\x00', 0x2)
    r28 = ioctl$NS_GET_PARENT(0xffffffffffffffff, 0xb702, 0x0)
    ioctl$FICLONE(r28, 0x40049409, r10)
    r29 = fsmount(r13, 0x1, 0x4)
    fsconfig$FSCONFIG_CMD_CREATE(r29, 0x6, 0x0, 0x0, 0x0)
    ioctl$UI_SET_PROPBIT(r5, 0x4004556e, 0x1f)
    r30 = openat$selinux_relabel(0xffffffffffffff9c, &(0x7f0000003b00), 0x2, 0x0)
    fcntl$F_SET_RW_HINT(r30, 0x40c, &(0x7f0000003b40)=0x1)
    setxattr$trusted_overlay_origin(&(0x7f0000003b80)='./file0\x00', &(0x7f0000003bc0), &(0x7f0000003c00), 0x2, 0x2)
    symlink(&(0x7f0000003c40)='./file0\x00', &(0x7f0000003c80)='./file0/file0\x00')
    r31 = openat$procfs(0xffffffffffffff9c, &(0x7f0000003cc0)='/proc/bus/input/handlers\x00', 0x0, 0x0)
    ioctl$BTRFS_IOC_QGROUP_LIMIT(r31, 0x8030942b, &(0x7f0000003d00)={0x71a, {0x4, 0x20, 0x7, 0xef, 0x9}})
    ioctl$SG_SET_RESERVED_SIZE(r29, 0x2275, &(0x7f0000003d40)=0x3)
    r32 = syz_open_procfs(0xffffffffffffffff, &(0x7f0000003d80)='net/ip6_tables_matches\x00')
    ioctl$EVIOCGMTSLOTS(r32, 0x8040450a, &(0x7f0000003dc0)=""/60)
    ioctl$BTRFS_IOC_QUOTA_RESCAN(r13, 0x4040942c, &(0x7f0000003e00)={0x0, 0x3ff, [0x10000, 0x0, 0xc0, 0xf0ae, 0x101, 0x2]})
    r33 = open_tree(r5, &(0x7f0000003e40)='./file0/file0\x00', 0x800)
    name_to_handle_at(r33, &(0x7f0000003e80)='./file0/file0\x00', &(0x7f0000003ec0)=@reiserfs_5={0x14, 0x5, {0x2, 0x3, 0x10001, 0x8001, 0x3}}, &(0x7f0000003f00), 0xc00)
    ioctl$FS_IOC_MEASURE_VERITY(r29, 0xc0046686, &(0x7f0000003f40)={0x0, 0x86, "551331bbdbc9f4237a0f29b5522dd75e3f331256c0559a6ab2068835d04ff5364e6b57d63e1df69178441c5ec8ce3143de0e575ea34d79e09ede7b43ef2815e24396c6aeb117fc2a997dd76a84f88b5855947981d492b05ab000eacf8cfe00ab6b9377c0b4838f517cea03c9b1445c393cdf6c1397897763594a91a5643ba8ef3b0c8edf313c"})
    ioctl$AUTOFS_IOC_CATATONIC(r8, 0x9362, 0x0)
    r34 = openat$selinux_checkreqprot(0xffffffffffffff9c, &(0x7f0000004000), 0x10000, 0x0)
    ioctl$SNAPSHOT_PREF_IMAGE_SIZE(r24, 0x3312, 0x8)
    ioctl$AUTOFS_DEV_IOCTL_ASKUMOUNT(r34, 0xc018937d, &(0x7f0000004040)={{0x1, 0x1, 0x18, <r35=>r31, {0x4}}, './file0/file0\x00'})
    ioctl$EXT4_IOC_GETSTATE(r35, 0x40046629, &(0x7f0000004080))
    ioctl$BTRFS_IOC_LOGICAL_INO_V2(r30, 0xc038943b, &(0x7f0000004100)={0x5, 0x30, '\x00', 0x1, &(0x7f00000040c0)=[0x0, 0x0, 0x0, 0x0, 0x0, 0x0]})
    r36 = openat$bsg(0xffffffffffffff9c, &(0x7f0000004140), 0x60000, 0x0)
    ioctl$EVIOCSKEYCODE(r36, 0x40084504, &(0x7f0000004180)=[0x8, 0x7fffffff])
    stat(&(0x7f0000004200)='./file0/file0\x00', &(0x7f0000004240)={0x0, 0x0, 0x0, 0x0, <r37=>0x0, <r38=>0x0})
    lstat(&(0x7f00000042c0)='./file0/file0\x00', &(0x7f0000004300)={0x0, 0x0, 0x0, 0x0, <r39=>0x0, <r40=>0x0})
    stat(&(0x7f0000004380)='./file0/file0\x00', &(0x7f00000043c0)={0x0, 0x0, 0x0, 0x0, <r41=>0x0, <r42=>0x0})
    stat(&(0x7f0000004440)='./file0/file0\x00', &(0x7f0000004480)={0x0, 0x0, 0x0, 0x0, <r43=>0x0, <r44=>0x0})
    ioctl$AUTOFS_DEV_IOCTL_REQUESTER(r13, 0xc018937b, &(0x7f0000004500)={{0x1, 0x1, 0x18, <r45=>r28, {<r46=>r18, <r47=>0xee00}}, './file0\x00'})
    fsetxattr$system_posix_acl(r12, &(0x7f00000041c0)='system.posix_acl_access\x00', &(0x7f0000004540)={{}, {0x1, 0x4}, [{0x2, 0x3, r18}], {0x4, 0x3}, [{0x8, 0x2, r4}, {0x8, 0x0, r38}, {0x8, 0x5, r4}, {0x8, 0x3, r40}, {0x8, 0x7, r4}, {0x8, 0x5, r42}, {0x8, 0x6, r44}, {0x8, 0x4, r47}, {0x8, 0x0, r4}], {0x10, 0x4}, {0x20, 0x1}}, 0x74, 0x3)
    r48 = syz_mount_image$fuse(&(0x7f00000045c0), &(0x7f0000004600)='./file0\x00', 0x8, &(0x7f0000004640)={{'fd', 0x3d, r11}, 0x2c, {'rootmode', 0x3d, 0xa000}, 0x2c, {'user_id', 0x3d, r43}, 0x2c, {'group_id', 0x3d, r19}, 0x2c, {[{@max_read={'max_read', 0x3d, 0x8}}, {@default_permissions}], [{@hash}, {@euid_eq={'euid', 0x3d, r6}}, {@euid_gt={'euid>', r18}}, {@hash}, {@fowner_lt={'fowner<', r3}}, {@uid_eq={'uid', 0x3d, r37}}, {@fowner_gt={'fowner>', r46}}]}}, 0x1, 0x0, &(0x7f0000004780)="94666b8ac714389e16fe1da5d09352b9a5ea229386202deaabd10e7e7b138449c404be92ad7619bad491b0972682e6fff9a3463886561dc63c1121b2b9c48b1b9edd685b79fef37288d2176a21a806b8187faa5ace9ed5fddc86d15e3cae5a538900a41abadfae634348a821f13bd9b9e9e8c78c6b3eeafc4b9ee2d4040504a5f418c6f61cc16a4a7715ffc6882fe2b60a45054247b79c6054f1893c97c62bdc839d3633834a0b96b5048869e4212c035966d7d2cf6b43")
    r49 = fspick(r48, &(0x7f0000004840)='./file0/file0\x00', 0x0)
    ioctl$F2FS_IOC_SET_PIN_FILE(r36, 0x4004f50d, &(0x7f0000004880)=0x1)
    r50 = fcntl$getown(r12, 0x9)
    r51 = openat$selinux_status(0xffffffffffffff9c, &(0x7f00000048c0), 0x0, 0x0)
    

The whole test input is attached.
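A quick way to confirm what state the client mount ended up in after the fuzzing run is sketched below (a diagnostic sketch, not part of the original report; it assumes the mount point /mnt/gluster-test from step 2):

    # Check whether the FUSE mount itself was flipped to read-only.
    grep ' /mnt/gluster-test ' /proc/mounts
    # If the options field shows "ro", the kernel mount is read-only.
    # With GlusterFS, EROFS can also come from the client-side replication
    # layer (e.g. quorum enforcement) while /proc/mounts still shows "rw";
    # in that case the FUSE client log is the place to look.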

The full output of the command that failed:
On the client node, running the command mkdir testfile consistently fails with:

mkdir: cannot create directory ‘testfile’: Read-only file system
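By convention the Gluster FUSE client writes a per-mount log named after the mount point, so for /mnt/gluster-test it would typically be /var/log/glusterfs/mnt-gluster-test.log (an assumed path; adjust to your layout). A hedged sketch for locating the cause of the EROFS there:

    # Sketch: scan the (assumed) FUSE client log for the read-only cause.
    tail -n 100 /var/log/glusterfs/mnt-gluster-test.log
    # AFR quorum messages such as "quorum is not met" would explain why the
    # client returns EROFS on writes even though the mount stays "rw".
    grep -iE 'read-only|quorum|EROFS' /var/log/glusterfs/mnt-gluster-test.log | tail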

Expected results:
The /mnt/gluster-test mount should remain available for write operations, such as mkdir, touch, etc. A recovery attempt is sketched below.
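Whether a simple remount clears the state can help localize the fault; this is a sketch of that recovery attempt (assuming nothing still holds the mount busy), not a confirmed fix:

    # Sketch: cycle the mount and retry a write.
    umount /mnt/gluster-test
    mount -t glusterfs 192.168.102.186:/gv0 /mnt/gluster-test
    mkdir /mnt/gluster-test/testdir && echo 'write ok'
    # If writes succeed again, the read-only state was held on the client
    # side; if EROFS persists, inspect the bricks and server logs instead.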

Mandatory info:
- The output of the gluster volume info command:
In gluster1, the output is:

root@gluster1:~# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: d0a6cfeb-706b-4e43-96c6-a7480bdc4803
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.102.186:/data/brick1/gv0
Brick2: 192.168.103.61:/data/brick1/gv0
Brick3: 192.168.102.34:/data/brick1/gv0
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off

In gluster2, the output is:

root@gluster2:~# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: d0a6cfeb-706b-4e43-96c6-a7480bdc4803
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.102.186:/data/brick1/gv0
Brick2: 192.168.103.61:/data/brick1/gv0
Brick3: 192.168.102.34:/data/brick1/gv0
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

In gluster3, the output is:

root@gluster3:/data/brick1/gv0# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: d0a6cfeb-706b-4e43-96c6-a7480bdc4803
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.102.186:/data/brick1/gv0
Brick2: 192.168.103.61:/data/brick1/gv0
Brick3: 192.168.102.34:/data/brick1/gv0
Options Reconfigured:
performance.client-io-threads: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

- The output of the gluster volume status command:

In gluster1, the output is:

root@gluster1:~# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.102.186:/data/brick1/gv0      59335     0          Y       22203
Brick 192.168.103.61:/data/brick1/gv0       55769     0          Y       15770
Brick 192.168.102.34:/data/brick1/gv0       59067     0          Y       2158222
Self-heal Daemon on localhost               N/A       N/A        Y       2158243
Self-heal Daemon on 192.168.103.61          N/A       N/A        Y       63928
Self-heal Daemon on 192.168.102.186         N/A       N/A        Y       116353
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

In gluster2, the output is:

root@gluster2:~# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.102.186:/data/brick1/gv0      59335     0          Y       22203
Brick 192.168.103.61:/data/brick1/gv0       55769     0          Y       15770
Brick 192.168.102.34:/data/brick1/gv0       59067     0          Y       2158222
Self-heal Daemon on localhost               N/A       N/A        Y       116353
Self-heal Daemon on 192.168.103.61          N/A       N/A        Y       63928
Self-heal Daemon on 192.168.102.34          N/A       N/A        Y       2158243
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

In gluster3, the output is:

root@gluster3:~# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.102.186:/data/brick1/gv0      59335     0          Y       22203
Brick 192.168.103.61:/data/brick1/gv0       55769     0          Y       15770
Brick 192.168.102.34:/data/brick1/gv0       59067     0          Y       2158222
Self-heal Daemon on localhost               N/A       N/A        Y       63928
Self-heal Daemon on 192.168.102.186         N/A       N/A        Y       116353
Self-heal Daemon on 192.168.102.34          N/A       N/A        Y       2158243
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

- The output of the gluster volume heal command:

The self-heal operation reports success:

Launching heal operation to perform index self heal on volume gv0 has been successful 
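The success message alone does not show whether any entries are still pending heal; a follow-up check with the standard heal-info subcommands (sketch):

    # Sketch: confirm nothing is pending heal on any brick.
    gluster volume heal gv0 info
    # A healthy replica reports "Number of entries: 0" for every brick.
    gluster volume heal gv0 info split-brain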

- Logs present at the following locations on the client and server nodes:

/var/log/glusterfs/glusterd.log

In gluster1:

[2023-06-02 03:02:02.098191 +0000] I [MSGID: 100030] [glusterfsd.c:2947:main] 0-/usr/local/sbin/glusterd: Started running version [{arg=/usr/local/sbin/glusterd}, {version=12dev}, {cmdlinestr=/usr/local/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO}] 
[2023-06-02 03:02:02.100236 +0000] I [glusterfsd.c:2637:daemonize] 0-glusterfs: Pid of current running process is 2158188
[2023-06-02 03:02:02.111281 +0000] I [MSGID: 0] [glusterfsd.c:1671:volfile_init] 0-glusterfsd-mgmt: volume not found, continuing with init 
[2023-06-02 03:02:02.326247 +0000] I [MSGID: 106479] [glusterd.c:1660:init] 0-management: Using /var/lib/glusterd as working directory 
[2023-06-02 03:02:02.326363 +0000] I [MSGID: 106479] [glusterd.c:1664:init] 0-management: Using /var/run/gluster as pid file working directory 
[2023-06-02 03:02:02.330006 +0000] I [socket.c:973:__socket_server_bind] 0-socket.management: process started listening on port (24007)
[2023-06-02 03:02:02.332307 +0000] I [socket.c:916:__socket_server_bind] 0-socket.management: closing (AF_UNIX) reuse check socket 12
[2023-06-02 03:02:02.335952 +0000] I [MSGID: 106059] [glusterd.c:1923:init] 0-management: max-port override: 60999 
[2023-06-02 03:02:02.396848 +0000] E [MSGID: 106061] [glusterd.c:597:glusterd_crt_georep_folders] 0-glusterd: Dict get failed [{Key=log-group}, {errno=2}, {error=No such file or directory}] 
[2023-06-02 03:02:03.497200 +0000] I [MSGID: 106513] [glusterd-store.c:2177:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 110000 
[2023-06-02 03:02:03.653600 +0000] W [MSGID: 106204] [glusterd-store.c:3247:glusterd_store_update_volinfo] 0-management: Unknown key: tier-enabled 
[2023-06-02 03:02:03.653813 +0000] W [MSGID: 106204] [glusterd-store.c:3247:glusterd_store_update_volinfo] 0-management: Unknown key: brick-0 
[2023-06-02 03:02:03.653844 +0000] W [MSGID: 106204] [glusterd-store.c:3247:glusterd_store_update_volinfo] 0-management: Unknown key: brick-1 
[2023-06-02 03:02:03.653872 +0000] W [MSGID: 106204] [glusterd-store.c:3247:glusterd_store_update_volinfo] 0-management: Unknown key: brick-2 
[2023-06-02 03:02:04.010750 +0000] I [MSGID: 106544] [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:04.178602 +0000] I [MSGID: 106498] [glusterd-handler.c:3780:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 
[2023-06-02 03:02:04.195177 +0000] I [MSGID: 106498] [glusterd-handler.c:3780:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 
[2023-06-02 03:02:04.195295 +0000] W [MSGID: 106061] [glusterd-handler.c:3575:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout 
[2023-06-02 03:02:04.195438 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2023-06-02 03:02:04.196225 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.listen-backlog 1024
  8:     option max-port 60999
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.socket.listen-port 24007
 12:     option transport.socket.read-fail-log off
 13:     option transport.socket.keepalive-interval 2
 14:     option transport.socket.keepalive-time 10
 15:     option transport-type socket
 16:     option working-directory /var/lib/glusterd
 17: end-volume
 18:  
+------------------------------------------------------------------------------+
[2023-06-02 03:02:04.196213 +0000] W [MSGID: 106061] [glusterd-handler.c:3575:glusterd_transport_inet_options_build] 0-glusterd: Failed to get tcp-user-timeout 
[2023-06-02 03:02:04.214817 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] 
[2023-06-02 03:02:04.217494 +0000] I [MSGID: 106163] [glusterd-handshake.c:1493:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 110000 
[2023-06-02 03:02:04.472212 +0000] I [MSGID: 106490] [glusterd-handler.c:2677:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e 
[2023-06-02 03:02:04.606877 +0000] I [MSGID: 106493] [glusterd-handler.c:3968:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.103.61 (0), ret: 0, op_ret: 0 
[2023-06-02 03:02:04.765565 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:454:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e, host: 192.168.103.61, port: 0 
[2023-06-02 03:02:04.851115 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e 
[2023-06-02 03:02:04.940878 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:04.941214 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e 
[2023-06-02 03:02:05.008033 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:05.008243 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e 
[2023-06-02 03:02:05.008745 +0000] I [glusterd-utils.c:6446:glusterd_brick_start] 0-management: starting a fresh brick process for brick /data/brick1/gv0
[2023-06-02 03:02:05.008470 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 56bbd9b8-00de-4d13-93ec-fb8ccdb5329e 
[2023-06-02 03:02:05.043589 +0000] I [MSGID: 106496] [glusterd-handshake.c:922:__server_getspec] 0-management: Received mount request for volume gv0.192.168.102.34.data-brick1-gv0 
[2023-06-02 03:02:05.234818 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:454:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: accb1df0-1177-4c99-beeb-056f7bce8042, host: 192.168.102.186, port: 0 
[2023-06-02 03:02:05.343966 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: accb1df0-1177-4c99-beeb-056f7bce8042 
[2023-06-02 03:02:05.477111 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:05.619236 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: accb1df0-1177-4c99-beeb-056f7bce8042 
[2023-06-02 03:02:05.619861 +0000] I [MSGID: 106163] [glusterd-handshake.c:1493:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 110000 
[2023-06-02 03:02:05.735438 +0000] I [MSGID: 106490] [glusterd-handler.c:2677:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: accb1df0-1177-4c99-beeb-056f7bce8042 
[2023-06-02 03:02:05.852482 +0000] I [MSGID: 106493] [glusterd-handler.c:3968:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.102.186 (0), ret: 0, op_ret: 0 
[2023-06-02 03:02:06.194385 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: accb1df0-1177-4c99-beeb-056f7bce8042 
[2023-06-02 03:02:06.418386 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:06.418639 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: accb1df0-1177-4c99-beeb-056f7bce8042 
[2023-06-02 03:02:06.648935 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2023-06-02 03:02:07.043134 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2023-06-02 03:02:07.043298 +0000] I [MSGID: 106131] [glusterd-proc-mgmt.c:81:glusterd_proc_stop] 0-management: quotad already stopped 
[2023-06-02 03:02:07.043366 +0000] I [MSGID: 106568] [glusterd-svc-mgmt.c:262:glusterd_svc_stop] 0-management: quotad service is stopped 
[2023-06-02 03:02:07.043412 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2023-06-02 03:02:07.043551 +0000] I [MSGID: 106131] [glusterd-proc-mgmt.c:81:glusterd_proc_stop] 0-management: bitd already stopped 
[2023-06-02 03:02:07.043591 +0000] I [MSGID: 106568] [glusterd-svc-mgmt.c:262:glusterd_svc_stop] 0-management: bitd service is stopped 
[2023-06-02 03:02:07.043632 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2023-06-02 03:02:07.043775 +0000] I [MSGID: 106131] [glusterd-proc-mgmt.c:81:glusterd_proc_stop] 0-management: scrub already stopped 
[2023-06-02 03:02:07.043791 +0000] I [MSGID: 106568] [glusterd-svc-mgmt.c:262:glusterd_svc_stop] 0-management: scrub service is stopped 
[2023-06-02 03:02:07.043840 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2023-06-02 03:02:07.044593 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600
[2023-06-02 03:02:07.076650 +0000] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2023-06-02 03:02:08.089704 +0000] I [MSGID: 106618] [glusterd-svc-helper.c:931:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=gv0) to existing process with pid 2158243 
[2023-06-02 03:02:08.090212 +0000] I [MSGID: 106496] [glusterd-handshake.c:922:__server_getspec] 0-management: Received mount request for volume shd/gv0 
[2023-06-02 03:02:08.098970 +0000] I [MSGID: 106617] [glusterd-svc-helper.c:696:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume gv0 attached successfully to pid 2158243 
[2023-06-02 06:55:12.604269 +0000] I [MSGID: 106487] [glusterd-handler.c:1438:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req 

In gluster2:

[2023-06-02 03:02:04.216064 +0000] I [MSGID: 106163] [glusterd-handshake.c:1493:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 110000 
[2023-06-02 03:02:04.297512 +0000] I [MSGID: 106490] [glusterd-handler.c:2677:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:05.233287 +0000] I [MSGID: 106493] [glusterd-handler.c:3968:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.102.34 (0), ret: 0, op_ret: 0 
[2023-06-02 03:02:05.480541 +0000] I [MSGID: 106618] [glusterd-svc-helper.c:931:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=gv0) to existing process with pid 116353 
[2023-06-02 03:02:05.480851 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:05.480974 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:05.618153 +0000] I [MSGID: 106617] [glusterd-svc-helper.c:696:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume gv0 attached successfully to pid 116353 
[2023-06-02 03:02:05.618336 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:05.920516 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:454:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f, host: 192.168.102.34, port: 0 
[2023-06-02 03:02:06.089042 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:06.089147 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:06.417431 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:04:08.152353 +0000] I [MSGID: 106487] [glusterd-handler.c:1438:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req 
[2023-06-02 05:48:05.610346 +0000] I [MSGID: 106533] [glusterd-volume-ops.c:711:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume full 
[2023-06-02 05:48:05.610441 +0000] E [MSGID: 106265] [glusterd-volume-ops.c:755:__glusterd_handle_cli_heal_volume] 0-management: Volume full does not exist 
[2023-06-02 05:48:48.602587 +0000] I [MSGID: 106533] [glusterd-volume-ops.c:711:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume gv0 
[2023-06-02 06:55:27.051871 +0000] I [MSGID: 106487] [glusterd-handler.c:1438:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req 

In gluster3:

[2023-06-02 03:02:04.216441 +0000] I [MSGID: 106163] [glusterd-handshake.c:1493:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 110000 
[2023-06-02 03:02:04.407524 +0000] I [MSGID: 106490] [glusterd-handler.c:2677:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:04.492999 +0000] I [MSGID: 106493] [glusterd-handler.c:3968:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 192.168.102.34 (0), ret: 0, op_ret: 0 
[2023-06-02 03:02:04.670588 +0000] I [MSGID: 106618] [glusterd-svc-helper.c:931:glusterd_attach_svc] 0-glusterd: adding svc glustershd (volume=gv0) to existing process with pid 63928 
[2023-06-02 03:02:04.670831 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:454:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f, host: 192.168.102.34, port: 0 
[2023-06-02 03:02:04.766021 +0000] I [MSGID: 106617] [glusterd-svc-helper.c:696:glusterd_svc_attach_cbk] 0-management: svc glustershd of volume gv0 attached successfully to pid 63928 
[2023-06-02 03:02:04.766123 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:04.852031 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:04.852321 +0000] I [MSGID: 106492] [glusterd-handler.c:2882:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:04.948880 +0000] I [MSGID: 106502] [glusterd-handler.c:2929:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend 
[2023-06-02 03:02:04.949107 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 
[2023-06-02 03:02:05.007328 +0000] I [MSGID: 106493] [glusterd-rpc-ops.c:668:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 22990ccf-c51b-4f0f-b2a2-2926b268d37f 

- The operating system / glusterfs version:

root@gluster1:~# gluster --version
glusterfs 12dev
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
