Thursday, June 19, 2025

Impact of mdadm -c, --chunk on random read/write performance and disk space utilization.

The mdadm man page is terse about what the chunk size actually controls, but it is essentially the stripe unit: the amount of data written to one member disk before moving on to the next (and it is also what the array advertises as its minimum I/O size). Regardless, I ran some random read/write tests with seekmark at two chunk sizes, 512K and 64K. mdadm RAID creation parameters -- 

mdadm -C /dev/md/test -l 5 --home-cluster=xxx --homehost=any -z 10G -p left-symmetric -x 0 -n 3 -c 512K|64K --data-offset=8K -N xxxx -k resync 
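
For reference, the chunk size the array actually ended up with can be confirmed after creation (array name as above) -- 

# both of these report the chunk size of the running array
cat /proc/mdstat
mdadm --detail /dev/md/test | grep -i chunk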

XFS format parameters -- 

mkfs.xfs -m rmapbt=0,reflink=0
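
mkfs.xfs normally picks up the stripe geometry from md on its own, but the stripe unit and width can also be set explicitly. For the 512K-chunk, 3-disk RAID5 above (2 data disks) the full command would look something like this -- 

# su = stripe unit (the md chunk size), sw = number of data disks
mkfs.xfs -m rmapbt=0,reflink=0 -d su=512k,sw=2 /dev/md/test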

Seekmark commands -- 

seekmark -i $((32*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((64*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((128*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((256*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
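
For convenience, the same sweep can be run as one loop -- 

# run seekmark at each test block size against the same target file
for bs in 32 64 128 256; do
    echo "== ${bs}K reads =="
    seekmark -i $((bs*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
done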

Random read results (all numbers are seeks/sec) -- 

                  512K chunks   64K chunks
seekmark 32K:          163.64       145.33
seekmark 64K:          153.89       133.40
seekmark 128K:         145.77       121.04
seekmark 256K:         130.16        99.60

So, for whatever reason, 512K chunks win even for small reads.

For 32K random writes, I was getting around 53 seeks/s with 512K chunks and 49 seeks/s with 64K chunks, so here too the larger chunk size wins, though by a small enough margin that there may be no real difference at all.
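
Assuming the write numbers came from seekmark's write mode (in the seekmark builds I've used it is enabled with -w, which requires the literal argument destroy-data as a safety check; treat that flag as a recollection, not gospel), the 32K write run would look roughly like this -- 

# -w destroy-data switches seekmark to random writes (it will overwrite data in the target file)
seekmark -w destroy-data -i $((32*1024)) -t 1 -s 1000 -f /mnt/archive/test-write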

On disk space utilization, the larger chunk size also wins with the same underlying XFS filesystem. For this test, 400000 4K-sized files were created: with a 4K chunk size 1.9G of space was used, and with a 16K chunk size 1.8G was used.
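
A sketch of that kind of test (the directory and file names here are illustrative, not the exact script used) -- 

# create 400000 files of 4K each, then see how much space the FS reports as used
mkdir -p /mnt/archive/smallfiles
for i in $(seq 1 400000); do
    dd if=/dev/zero of=/mnt/archive/smallfiles/f$i bs=4K count=1 status=none
done
sync
df -h /mnt/archive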
