Tuesday, August 5, 2025

Integrating sccache-dist in portage/gentoo for rust distributed compiling.

NOTE: it did not work out. The (remote) sccache-dist server errors out with -- 

Missing output path "/tmp/portage/www-client/firefox-128.12.0/work/firefox_build/instrumented/x86_64-unknown-linux-gnu/release/deps/fallible_iterator-9ab9b312481cd614.d"

The build.log you'll get -- 

Could not perform distributed compile, falling back to local: failed to rewrite outputs from compile: No outputs matched dep info file /tmp/portage/www-client/firefox-128.12.0/work/firefox_build/instrumented/release/deps/unicode_ident-9a905afffc6beb9c.d

And the compile happens locally, not at the remote sccache-dist server. 

The main difference between my approach and this article is that I rely on RUSTC_WRAPPER (and cargo) to pass the compilation process on to the remote build server, whereas that article relies on creating a symlink system, after which sccache works for all packages, not just rust -- which was something I was not trying to achieve.

Also, the toolchain will be copied over from the client (the machine which is actually initiating the compile) to the build server. In case the toolchain's binaries are not compatible with the build server (which IS probably the case with most of gentoo's toolchain, but not with rust-bin), it won't work out. The toolchain binaries will error out on the build server.

Regardless, let's begin.

First, sccache must be built with the dist-client and dist-server features on both the client and the build server.

Add the following to make.conf of the client machine -- 

RUSTC_WRAPPER=/usr/bin/sccache
SCCACHE_MAX_FRAME_LENGTH=104857600
SCCACHE_IGNORE_SERVER_IO_ERROR=0
SCCACHE_DIR=/var/tmp/sccache
SCCACHE_CACHE_SIZE=5G
SCCACHE_CONF=/etc/sccache-client.toml
FEATURES="-ipc-sandbox -network-sandbox -network-sandbox-proxy -pid-sandbox"

Create the client config on the client machine -- 

[dist]
scheduler_url = "http://<IP address of your build server>:64888"
toolchains = []
cache_dir = "/var/tmp/sccache-dist"
[dist.auth]
type = "token"
token = "<a plain text secret>"

And save it as /etc/sccache-client.toml

chown portage:portage /var/tmp/sccache-dist /var/tmp/sccache

Create scheduler config on the builder machine -- 

public_addr = "0.0.0.0:64888"
[client_auth]
type = "token"
token = "<a plain text secret>"
[server_auth]
type = "jwt_hs256"
secret_key = "<generate using `sccache-dist auth generate-jwt-hs256-key`>"

Assuming file is saved as /etc/sccache-sched.toml.

Run the scheduler (as user) --

SCCACHE_NO_DAEMON=1 sccache-dist scheduler --config /etc/sccache-sched.toml

Create build server config --

public_addr = "<IP address of your build server>:8889"
scheduler_url = "http://<IP address of your build server>:64888"
cache_dir = "/var/tmp/sccache_server_toolchain/"
[builder]
type = "overlay"
build_dir = "/var/tmp/sccache_builddir/"
bwrap_path = "/usr/bin/bwrap"
[scheduler_auth]
type = "jwt_token"
token = "<generate using `sccache-dist auth generate-jwt-hs256-server-token --server <IP address of your build server>:8889 --config /etc/sccache-sched.toml`>"

Assuming the config is saved as /etc/sccache-build.toml.

Run the build server (as root) -- 

SCCACHE_NO_DAEMON=1 sccache-dist server --config /etc/sccache-build.toml
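
With the scheduler and build server running, you can verify the client can actually reach the scheduler (the --dist-status flag is available when sccache is built with dist-client; the output format varies between versions) --

sccache --stop-server
sccache --dist-status

A healthy setup lists the scheduler and a non-zero number of connected servers.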

Thursday, June 19, 2025

Impact of mdadm -c, --chunk on random read/write performance and disk space utilization.

No one knows exactly what this is in the context of mdadm, but it must be the minimum i/o size of the RAID block device. Regardless, I did some random read/write tests using various chunk sizes using seekmark. mdadm RAID creation parameters -- 

mdadm -C /dev/md/test -l 5 --home-cluster=xxx --homehost=any -z 10G -p left-symmetric -x 0 -n 3 -c 512K|64K --data-offset=8K -N xxxx -k resync 

XFS format parameters -- 

mkfs.xfs -m rmapbt=0,reflink=0

Seekmark commands -- 

seekmark -i $((32*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((64*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((128*1024)) -t 1 -s 1000 -f /mnt/archive/test-write
seekmark -i $((256*1024)) -t 1 -s 1000 -f /mnt/archive/test-write

512K chunks -- 

seekmark 32K: 163.64
seekmark 64K: 153.89
seekmark 128K: 145.77
seekmark 256K: 130.16

64K chunks -- 

seekmark 32K: 145.33
seekmark 64K: 133.40
seekmark 128K: 121.04
seekmark 256K: 99.60

The unit is seeks/sec.

Therefore, for some reason 512K chunks win even for small reads.

For 32K writes, I was getting around 53 seeks/s using 512K chunks and 49 seeks/s for 64K chunks, so here too the large chunk size wins by a small margin (and maybe there's no difference at all).

For disk space utilization, the large chunk size wins too when used with the same underlying xfs FS. For the test, 400000 4K sized files were created. At a 4K chunk size, 1.9G of space was used; at a 16K chunk size, 1.8G of space was used.
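
A minimal sketch of the kind of commands used for the space test (not necessarily the exact script) --

mkdir /mnt/archive/manyfiles && cd /mnt/archive/manyfiles
for i in $(seq 1 400000); do dd if=/dev/urandom of=file_$i bs=4K count=1 status=none; done
sync; df -h /mnt/archive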

Tuesday, June 10, 2025

mdadm (RAID 5) performance under different parity layouts (-p --parity --layout)

While the performance of right-asymmetric, left-asymmetric, right-symmetric and left-symmetric is roughly the same, the performance of parity-last and parity-first seems strikingly faster for reads.

Tests were done on a RAID 5 setup over 3 USB hard drives, each with 10TB capacity. Each HDD is capable of 250+ MB/s simultaneously (therefore the USB link is not saturated).

The optimal chunk size for right-asymmetric, left-asymmetric, right-symmetric and left-symmetric starts at 32KB, where the sequential read speeds are around 475MB/s. At 256KB and 512KB chunks, the read speeds slightly improve to around 483MB/s. Below 32KB chunks, the read speeds suffer significantly: I get 120MB/s reads at 4K chunks. The write speeds are around 480MB/s even for 4KB chunks and remain the same even for 512KB chunks (no tests were done beyond this size).

With parity-last/first you can afford to have a lower chunk size with the same read performance. For e.g. at 16K chunks, I was getting writes of 488MB/s and reads of 478MB/s. However, the lowest optimal chunk size was 32K, where I was getting 490MB/s writes and 506MB/s reads. The performance remained the same up to a 512K chunk size. Therefore in a 3 disk RAID-5 setup, parity-last/first gives the optimal performance at a lower chunk size (compared to other parity layouts), which MUST be a good deal; however, as per the other tests done, both a lower chunk size and parity-last/first are not a good idea.

The problem with parity-last/first is that the writes do not scale beyond 2 data disks (i.e. 3 disks in total), which was a RAID-4 problem -- and parity-last/first IS a raid 4 layout. Technically, only the random writes must not scale and it must not impact sequential writes, but it seems it does not scale even for sequential writes. Synthetic tests were done by starting a VM in qemu with 5 block devices, each of which was throttled to 5MB/s. These are the tests done (with 5 disks) -- 

Create qemu storage --
qemu-img create -f qcow2 -o lazy_refcounts=on RAID5-test-storage1.qcow2 20G
qemu-img create -f qcow2 -o lazy_refcounts=on RAID5-test-storage2.qcow2 20G
qemu-img create -f qcow2 -o lazy_refcounts=on RAID5-test-storage3.qcow2 20G
qemu-img create -f qcow2 -o lazy_refcounts=on RAID5-test-storage4.qcow2 20G
qemu-img create -f qcow2 -o lazy_refcounts=on RAID5-test-storage5.qcow2 20G

Launch qemu -- 

qemu-system-x86_64 -machine accel=kvm,kernel_irqchip=on,mem-merge=on -drive file=template_trixie.raid5.qcow2,id=centos,if=virtio,media=disk,cache=unsafe,aio=threads,index=0 -drive file=RAID5-test-storage1.qcow2,id=storage1,if=virtio,media=disk,cache=unsafe,aio=threads,index=1,throttling.bps-total=$((5*1024*1024)) -drive file=RAID5-test-storage2.qcow2,id=storage2,if=virtio,media=disk,cache=unsafe,aio=threads,index=2,throttling.bps-total=$((5*1024*1024)) -drive file=RAID5-test-storage3.qcow2,id=storage3,if=virtio,media=disk,cache=unsafe,aio=threads,index=3,throttling.bps-total=$((5*1024*1024)) -drive file=RAID5-test-storage4.qcow2,id=storage4,if=virtio,media=disk,cache=unsafe,aio=threads,index=4,throttling.bps-total=$((5*1024*1024)) -drive file=RAID5-test-storage5.qcow2,id=storage5,if=virtio,media=disk,cache=unsafe,aio=threads,index=5,throttling.bps-total=$((5*1024*1024)) -vnc [::1]:0 -device e1000,id=ethnet,netdev=primary,mac=52:54:00:12:34:56 -netdev tap,ifname=veth0,script=no,downscript=no,id=primary -m 1024 -smp 12 -daemonize -monitor pty -serial pty > /tmp/vm0_pty.txt

mdadm parameters for parity-last/first --

mdadm -C /dev/md/bench -l 5 --home-cluster=archive10TB --homehost=any -z 1G -p parity-last -x 0 -n 5 -c 512K --data-offset=8K -N tempRAID -k resync /dev/disk/by-path/virtio-pci-0000:00:0{5..9}.0

mdadm parameters for left-symmetric --

mdadm -C /dev/md/bench -l 5 --home-cluster=archive10TB --homehost=any -z 1G -p left-symmetric -x 0 -n 5 -c 512K --data-offset=8K -N tempRAID -k resync /dev/disk/by-path/virtio-pci-0000:00:0{5..9}.0

Write test --
cat /dev/urandom | tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | dd of=/dev/md/bench bs=1M count=100 oflag=direct iflag=fullblock

 Read test --
dd if=/dev/md/bench of=/dev/null bs=1M count=100 iflag=direct

For the writes, I was getting 10MB/s with parity-last and 13.4MB/s with left-symmetric (34% higher).

For reads, I was getting 21.8MB/s with parity-last and 27.6MB/s with left-symmetric.

Therefore it seems left-symmetric was scaling better in every way.

To ensure nothing was wrong with the test setup, I repeated the same test for parity-last/first with 3 disks instead and I was getting 10.7MB/s writes and 10.7MB/s reads.

With this I come to the conclusion that parity-last/first write throughput scales to at best 2 data disks. Yes, I agree I was getting a little extra speed for reads with left-symmetric with 5 disks (because theoretically it must be up to 20MB/s), but why exactly that happened is beyond my understanding.

As for why a smaller chunk size is not a good idea, I'll write about that in another blog post.

Wednesday, May 28, 2025

Re-writing HDDs to avoid bitrot/degradation under linux.

Over the years, your archival HDDs are susceptible to bitrot. You've to re-write them at regular intervals to prevent that. You can use dd for that --

dd if=/dev/sdX of=/dev/sdX bs=1M conv=notrunc iflag=fullblock

This is even resilient to power failures (I tested that over a VM).

Tuesday, April 29, 2025

ffmpeg: Audio/video out of sync in ffmpeg when frame rate limit is set using -r.

 If you've specified -r at the input, you may want to try moving it to before -vcodec to resolve the issue. With this change, the input is not frame limited, but the encoding is frame limited.
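
As an illustration (filenames and the rate are placeholders) --

# Input-side -r; can cause the A/V desync
ffmpeg -r 30 -i input.mkv -vcodec libx264 -acodec copy output.mkv
# Output-side -r; only the video encoding is frame limited
ffmpeg -i input.mkv -r 30 -vcodec libx264 -acodec copy output.mkv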

Ext4 vs xfs (with and without rmapbt) massive small file operations benchmark

 Methodology

/mnt/tmpfs/ contains trimmed linux sources. Large files were removed to reduce the total storage size to 5GB. /mnt/tmpfs/ is a tmpfs filesystem.

The following are the benchmarks done --
Copy operation --
time cp -a /mnt/tmpfs/* /mnt/temp/
Cold search --
time find /mnt/temp/ -iname '*a*' > /dev/null
Warm search --
time for i in {a..j}; do find /mnt/temp/ -iname "*$i*" > /dev/null; done
read all files in an alphabetic way (cold) --
time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
read all files in an alphabetic way (warm) --
time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
Write a certain small value to all files alphabetically (check for CPU utilization too of the script) --
cd /mnt/temp/
find /mnt/temp/ -type f > /tmp/flist.txt
dd if=/dev/urandom of=/tmp/write_data bs=1K count=6
time write_mulitple_files.rb /tmp/flist.txt /tmp/write_data
Delete dir tree --
time rm -rf /mnt/temp/*

HDD benchmarks

mount and mkfs options

mount parameters for xfs --
mount -o logbufs=8,logbsize=256k,noquota,noatime

mount parameters for ext4 -- 
mount -o noatime,data=writeback,journal_async_commit,inode_readahead_blks=32768,max_batch_time=10000000,i_version,noquota,delalloc
nodelalloc was removed since bigalloc was removed.
This ext4 configuration is optimized for small + large files; it shouldn't make a difference in performance.

format parameters for xfs and ext4 -- 
mkfs.ext4 -g 256 -G 4 -J size=100 -m 1 -O none,extent,flex_bg,has_journal,large_file,^uninit_bg,dir_index,dir_nlink,^sparse_super,^sparse_super2 -i 4096
bigalloc had to be removed because of the large no. of inodes (expect worse performance with larger files, which this benchmark does not cover).
 
mkfs.xfs -f -m rmapbt=0,reflink=0

Results -- 

ext4 --
Create/copy --
0m27.925s
Cold search --
0m0.157s
Warm search --
0m1.509s
read all files in an alphabetic way (cold) (parallel) --
0m0.253s
read all files in an alphabetic way (warm) (parallel) --
0m0.252s
Write a certain small value to all files alphabetically in parallel --
11m41.727s
Delete dir tree --
0m1.161s

xfs --
Create/copy --
0m21.857s
Cold search --
0m0.081s
Warm search --
0m0.752s
read all files in an alphabetic way (cold) (parallel) --
0m0.239s
read all files in an alphabetic way (warm) (parallel) --
0m0.238s
Write a certain small value to all files alphabetically in parallel --
11m43.711s
Delete dir tree --
0m1.086s

Conclusion -- 

Despite rmapbt being disabled in XFS (which improves performance with small files), XFS is faster than ext4 in most tests. If this ext4 FS (which is optimized for large files) is used for operations on large files, expect lower performance.

SSD benchmarks

mount and mkfs options

blkdiscard done before each benchmark.
 
 
mount parameters for xfs --
mount -o logbufs=8,logbsize=256k,noquota,noatime

mount parameters for ext4 -- 
mount -o noatime,data=writeback,journal_async_commit,inode_readahead_blks=32768,max_batch_time=10000000,i_version,noquota,delalloc
nodelalloc was removed since bigalloc was removed.
This ext4 configuration is optimized for small + large files; it shouldn't make a difference in performance.

format parameters for xfs and ext4 -- 
mkfs.ext4 -g 256 -G 4 -J size=100 -m 1 -O none,extent,flex_bg,has_journal,large_file,^uninit_bg,dir_index,dir_nlink,^sparse_super,^sparse_super2 -i 4096
bigalloc had to be removed because of the large no. of inodes (expect worse performance with larger files, which this benchmark does not cover).
 
xfs with no rmapbt --
mkfs.xfs -f -m rmapbt=0,reflink=0

xfs with rmapbt -- 
mkfs.xfs -f -m rmapbt=1,reflink=0

Results -- 

ext4 --
    Copy operation --
    time cp -a /mnt/tmpfs/* /mnt/temp/
        real    0m48.826s
        user    0m0.204s
        sys     0m3.005s
        
        real    0m48.290s
        user    0m0.246s
        sys     0m2.898s

    Cold search --
    time find /mnt/temp/ -iname '*a*' > /dev/null
        real    0m0.172s
        user    0m0.074s
        sys     0m0.097s
        
        real    0m0.169s
        user    0m0.064s
        sys     0m0.105s
        
    Warm search --
    time for i in {a..j}; do find /mnt/temp/ -iname "*$i*" > /dev/null; done
        real    0m1.616s
        user    0m0.536s
        sys     0m1.075s
        
        real    0m1.651s
        user    0m0.615s
        sys     0m1.031s
        
    read all files in an alphabetic way (cold) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.444s
    user    0m0.227s
    sys     0m2.850s
    
    real    0m0.402s
    user    0m0.271s
    sys     0m2.793s
    
    read all files in an alphabetic way (warm) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.407s
    user    0m0.230s
    sys     0m2.851s
    
    real    0m0.402s
    user    0m0.223s
    sys     0m2.845s
    
    Write a certain small value to all files alphabetically (check for CPU utilization too of the script) --
    cd /mnt/temp/
    find -type f > /tmp/flist.txt
    dd if=/dev/urandom of=/tmp/write_data bs=1K count=6
    time /home/de/small/docs/Practice/Software/ruby/write_mulitple_files.rb /tmp/flist.txt /tmp/write_data
    real    9m59.305s
    user    9m53.748s
    sys     0m51.903s
    
    real    9m38.867s
    user    9m33.476s
    sys     0m49.930s
    
    Delete dir tree --
    time rm -rf /mnt/temp/*
    real    0m0.824s
    user    0m0.021s
    sys     0m0.743s
    
    real    0m0.820s
    user    0m0.038s
    sys     0m0.718s
xfs rmapbt=0
    Copy operation --
    time cp -a /mnt/tmpfs/* /mnt/temp/
    real    0m14.851s
    user    0m0.298s
    sys     0m3.860s
    
    Cold search --
    time find /mnt/temp/ -iname '*a*' > /dev/null
    real    0m0.082s
    user    0m0.054s
    sys     0m0.027s
    
    
    Warm search --
    time for i in {a..j}; do find /mnt/temp/ -iname "*$i*" > /dev/null; done
    real    0m0.694s
    user    0m0.511s
    sys     0m0.179s
    
    read all files in an alphabetic way (cold) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.389s
    user    0m0.277s
    sys     0m2.680s
    
    
    read all files in an alphabetic way (warm) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.388s
    user    0m0.256s
    sys     0m2.705s

    
    Write a certain small value to all files alphabetically (check for CPU utilization too of the script) --
    cd /mnt/temp/
    find /mnt/temp/ -type f > /tmp/flist.txt
    dd if=/dev/urandom of=/tmp/write_data bs=1K count=6
    time /home/de/small/docs/Practice/Software/ruby/write_mulitple_files.rb /tmp/flist.txt /tmp/write_data
    real    10m45.878s
    user    10m40.476s
    sys     0m7.636s
    
    Delete dir tree --
    time rm -rf /mnt/temp/*
    real    0m1.181s
    user    0m0.030s
    sys     0m0.482s
xfs rmapbt=1
    Copy operation --
    time cp -a /mnt/tmpfs/* /mnt/temp/
    real    0m2.883s
    user    0m0.159s
    sys     0m2.556s

    
    Cold search --
    time find /mnt/temp/ -iname '*a*' > /dev/null
    real    0m0.082s
    user    0m0.049s
    sys     0m0.033s
    
    Warm search --
    time for i in {a..j}; do find /mnt/temp/ -iname "*$i*" > /dev/null; done
    real    0m0.700s
    user    0m0.480s
    sys     0m0.216s
    
    read all files in an alphabetic way (cold) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.389s
    user    0m0.218s
    sys     0m2.752s
    
    read all files in an alphabetic way (warm) --
    time find /mnt/temp/ -type f | xargs -d $'\n' -r -P 100 -n 300 -L 300 cat > /dev/null
    real    0m0.389s
    user    0m0.229s
    sys     0m2.739s
    
    Write a certain small value to all files alphabetically (check for CPU utilization too of the script) --
    cd /mnt/temp/
    find /mnt/temp/ -type f > /tmp/flist.txt
    dd if=/dev/urandom of=/tmp/write_data bs=1K count=6
    time /home/de/small/docs/Practice/Software/ruby/write_mulitple_files.rb /tmp/flist.txt /tmp/write_data
    real    8m53.297s
    user    8m48.394s
    sys     0m9.786s
    
    Delete dir tree --
    time rm -rf /mnt/temp/*
    real    0m2.373s
    user    0m0.024s
    sys     0m0.498s

Conclusion -- 

When comparing xfs rmapbt=1 and xfs rmapbt=0, rmapbt=1 wins on average (but not by a large margin).

When comparing xfs rmapbt=1 and ext4, xfs wins by a large margin.

Monday, April 28, 2025

Debian trixie vs Gentoo benchmark.

Recently I came across this benchmark which, although old, is laughable (if you don't know why, I suggest you either read up more about machine code or remain a happy Ubuntu user) because of the inaccurate benchmark method in regards to Gentoo.

Also at this time I had just installed Debian trixie (still in testing) on another machine and realized that the versions of various applications in their repositories were strikingly similar. So I decided to also do a casual benchmark, which although not that accurate, is FAR more accurate than that phoronix benchmark.

 Openssl (higher the better) -- 

 Firefox https://browserbench.org/Speedometer2.1/ (higher the better) -- 

 CPU and real run time of various CPU intensive applications (lower the better) -- 

 xz real and CPU time taken (lower the better) -- 

bash script benchmark results (lower the better) -- 


The machine is a Ryzen 5 PRO 2600 -- which is an old machine (x86_64-v3 instruction set). The highest contrast with the benchmark must be seen with newer processors, especially x86_64-v4 (avx512) ones, because binary distributions (except clearlinux) are optimized for the x86_64 baseline, which is 3 generations behind the latest. In short, you're not fully utilizing your shiny new x86_64-v4 processors unless you use Gentoo. In these matters, even Windows is better off, because its hefty 'minimum requirement' just for running the OS implies they can compile binaries above the baseline x86_64 instruction set.

As of now, I'm not able to get chromium to run on Gentoo because the GPU of the machine has been blacklisted as per chrome. It works on the Intel platform though.

Many of the applications may use assembly code. These applications perform the same regardless of the optimization applied by GCC. Common applications include openssl, various video codec libraries, prime95 etc... but I'm not entirely sure how much assembly they're using; this is the reason why I chose sparsely used algos in openssl for benchmark purposes, since developers are less likely to put effort into a less used algo.
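
For e.g., a sparsely used algo can be benchmarked like this (the algorithm here is illustrative; the exact ones I used are in the downloadable script) --

openssl speed -multi "$(nproc)" -evp camellia-256-cbc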

Many applications are not bottlenecked by the CPU even though it may seem so; that's because they put more stress on memory speed than on the CPU. Even when the memory is the bottleneck, the CPU utilization is reported as 100% because of how closely the memory and CPU work; an e.g. is compression workloads. In these benchmarks, there will not be much of a difference.

imagemagick's compare was able to run on all 12 CPUs on Debian, but only 2 CPUs on Gentoo. As a result, I limited the benchmark to 2 CPUs; however, in this configuration, Debian's build of imagemagick took double the time of gentoo's. Because of the large difference I really doubt this is because of the optimization differences between the 2 builds. For larger images, gentoo's build is able to use all 12 CPUs, but since it was taking too much time (for both Debian and Gentoo) I abandoned it.

Package versions of Gentoo -- 

imagemagick-7.1.1.38-r2

bash-5.2_p37

openssl-3.3.3

firefox-128.8.0

ffmpeg-6.1.2-r1

xz-utils-5.6.4-r1

grep-3.11-r1

gcc - 14.2.1_p20250301 (all packages were built using this version. CFLAGS in make.conf were -march=znver1 --param=l1-cache-line-size=64 --param=l1-cache-size=32 --param=l2-cache-size=512 -fomit-frame-pointer -floop-interchange -floop-strip-mine -floop-block -fgraphite-identity -ftree-loop-distribution -O3 -pipe -flto=1 -fuse-linker-plugin -ffat-lto-objects -fno-semantic-interposition, however a few packages (like firefox) iron many of the CFLAGS out).

Package versions for Debian -- 

imagemagick-7.1.1.43+dfsg1-1

bash-5.2.37-1.1+b2

openssl-3.4.1-1

firefox-128.9.0esr-2

ffmpeg-7.1.1-1+b1

xz-utils-5.8.1-1

grep-3.11-4

gcc-14.2

The Debian system is a fresh install, while the Gentoo installation is from 2009. Over the years, the same installation has been migrated/replicated across multiple machines. Debian was installed on a pendrive while Gentoo was installed on an SSD; of course, disk i/o was monitored during the benchmark and only the CPU was the bottleneck (there was no i/o wait). All data for the benchmark was loaded from an external HDD (here too disk i/o was not the bottleneck).

For the source of the benchmark, download from here. These are its contents -- 

script.sh -- The script which was run for the benchmark.

ff-bench_debian.png/ff-bench_gentoo.png -- Screenshot of FF benchmark (which of course the script did not run).

benchmark_results_debian.txt/result_gentoo.txt -- output of script.sh

shell_bench_Result_gentoo.txt/shell_bench_Result_debian.txt -- Output of shell-bench.sh on Gentoo/Debian.

shell-bench.sh -- Grep and bash benchmark script.

Thursday, April 10, 2025

Debian trixie sources.list (with unstable and experimental added) and corresponding apt pin configuration.

 This is the /etc/apt/sources.list -- 

deb http://deb.debian.org/debian/ testing main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ testing main contrib non-free non-free-firmware
deb http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ trixie main contrib non-free non-free-firmware
deb http://deb.debian.org/debian/ trixie-updates main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ trixie-updates main contrib non-free non-free-firmware

deb http://deb.debian.org/debian/ trixie-backports main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ trixie-backports main contrib non-free non-free-firmware

deb http://deb.debian.org/debian/ trixie-proposed-updates main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ trixie-proposed-updates main contrib non-free non-free-firmware

# ensure there is no testing-security and remove these
deb http://security.debian.org/debian-security testing-security updates/main updates/contrib updates/non-free updates/non-free-firmware
deb-src http://security.debian.org/debian-security testing-security updates/main updates/contrib updates/non-free updates/non-free-firmware
deb http://security.debian.org/debian-security trixie-security updates/main updates/contrib updates/non-free updates/non-free-firmware
deb-src http://security.debian.org/debian-security trixie-security updates/main updates/contrib updates/non-free updates/non-free-firmware

deb http://deb.debian.org/debian/ unstable main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ unstable main contrib non-free non-free-firmware

deb http://deb.debian.org/debian/ sid main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ sid main contrib non-free non-free-firmware

deb http://www.deb-multimedia.org/ trixie main non-free
deb-src http://www.deb-multimedia.org/ trixie main non-free

# not available now, but will be later
deb http://www.deb-multimedia.org/ trixie-backports main non-free
deb-src http://www.deb-multimedia.org/ trixie-backports main non-free

deb http://www.deb-multimedia.org/ testing main non-free
deb-src http://www.deb-multimedia.org/ testing main non-free

deb http://www.deb-multimedia.org/ unstable main non-free
deb-src http://www.deb-multimedia.org/ unstable main non-free

deb http://www.deb-multimedia.org/ experimental main non-free
deb-src http://www.deb-multimedia.org/ experimental main non-free

This is the corresponding pin configuration (you may place this in /etc/apt/preferences.d/custom) --

Package: *
Pin: release n=trixie-security
Pin-Priority: 996

Package: *
Pin: release n=trixie-updates
Pin-Priority: 995

Package: *
Pin: release n=trixie
Pin-Priority: 991

Package: *
Pin: release n=trixie-proposed-updates
Pin-Priority: 990

Package: *
Pin: release n=trixie-backports
Pin-Priority: 550

Package: *
Pin: release n=trixie,o=Unofficial Multimedia Packages
Pin-Priority: 600

Package: *
Pin: release a=testing
Pin-Priority: 140

Package: *
Pin: release a=unstable
Pin-Priority: 130

Package: *
Pin: release a=experimental
Pin-Priority: 120
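
To verify the pins are applied as intended (the package name is just an example) --

apt-cache policy
apt-cache policy firefox-esr

And to pull a single package from a lower-priority release explicitly --

apt-get install -t unstable <package>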

Thursday, March 27, 2025

xfs vs ext4 sequential operation benchmark (both on HDD and nvme)

Methodology

sequentially writing a 5GB file with cache --
cat /dev/urandom | tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | dd of=random iflag=fullblock bs=1M count=5120

Next write without cache --
rm random
sync; echo 3 > /proc/sys/vm/drop_caches
cat /dev/urandom | tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | tee /dev/stdout| tee /dev/stdout| tee /dev/stdout | dd of=random iflag=fullblock bs=1M count=5120 oflag=direct

Read without cache --
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=random of=/dev/null iflag=direct bs=1M

Read with cache --
dd if=random of=/dev/null bs=1M
 
Repeated read with cache --
sync; echo 3 > /proc/sys/vm/drop_caches
for i in {1..10}; do dd if=random of=/dev/null bs=1M; sleep 1; done
 

FS format parameters and mount options -- 

2 benchmarks will be done for XFS: one with rmapbt=0 and the other with rmapbt=1.
These are the xfs parameters -- 
mkfs.xfs -f -m rmapbt=0,reflink=0
mkfs.xfs -f -m rmapbt=1,reflink=0

ext4 format options are either for large file or for both large and small files.
ext4 format options optimized for large files --
mkfs.ext4 -m 1 -O none,dir_index,extent,^flex_bg,^bigalloc,has_journal,large_file,sparse_super2,^uninit_bg 
 
ext4 format options optimized for both large and small files --
mkfs.ext4 -g 256 -G 4 -J size=100 -m 1 -C 2097152 -O none,bigalloc,extent,flex_bg,has_journal,large_file,sparse_super2,^uninit_bg,dir_index,dir_nlink,^sparse_super,^sparse_super2

xfs mount options -- 
mount -o logbufs=8,logbsize=256k,noquota,noatime

ext4 mount options (when formatted for large file optimization) -- 
mount -o noquota,noatime,data=writeback,journal_async_commit,inode_readahead_blks=32768,max_batch_time=10000000

Benchmark results

xfs rmapbt on vs off in nvme -- 

Without rmapbt
    sequentially writing a 5GB file with cache --
    2.3 GB/s
    
    Next write without cache --
    1.7 GB/s
    
    Read without cache --
    2.2 GB/s
    
    Read with cache --
    2.9 GB/s
    
    Repeated read with cache (GB/s) --
    2.8
    17.3
    16.2
    16.2
    16.2
    16.2
    16.3
    16.3
    16.1
    16.3
With rmapbt
    sequentially writing a 5GB file with cache --
    2.4 GB/s
    
    Next write without cache --
    1.7 GB/s
    
    Read without cache --
    2.2 GB/s
    
    Read with cache --
    2.8 GB/s
    
    Repeated read with cache --
    2.8 GB/s
    16.4 GB/s
    16.5 GB/s
    16.5 GB/s
    16.5 GB/s
    16.4 GB/s
    16.5 GB/s
    16.4 GB/s
    16.5 GB/s
    16.5 GB/s

Sequential read/write operations with rmapbt on/off in XFS (on HDD)

XFS (rmapbt=0) --
sequentially writing a 1GB file with cache --
116 MB/s,112 MB/s
Next write without cache --
105 MB/s
Read without cache --
104 MB/s
Read with cache --
104 MB/s
Read with cache again --
13.8 GB/s
Repeated read with cache --
This was done after formatting + sequentially writing a 1GB file with cache
105,17.4,17.3,17.3,17.3,17.3,17.4,17.4,17.4,17.4
Avg: 17.36

XFS format options with rmapbt=1 --
sequentially writing a 1GB file with cache --
115 MB/s
Repeated read with cache --
This was done after formatting + sequentially writing a 1GB file with cache
106, 13.9,13.8,13.4,13.9,13.7,14.1,13.9,13.8,14.0
Avg: 13.83

Sequential read/write operations on ext4

ext4 (optimized for large files) --
sequentially writing a 1GB file with cache --
112 MB/s,112 MB/s
Next write without cache --
104 MB/s
Read without cache --
105 MB/s
Read with cache --
105 MB/s
Read with cache again --
11.2 GB/s
Repeated read with cache --
This was done after formatting + sequentially writing a 1GB file with cache
108,11.4,11.3,11.2,13.7,12.5,12.3,12.4,12.2,12.3
Avg: 12.14

ext4 format options optimized for both small and large files --
sequentially writing a 1GB file with cache --
115 MB/s
Repeated read with cache --
This was done after formatting + sequentially writing a 1GB file with cache
103,11.8,12.0,12.0,11.9,12.8,12.8,12.8,12.9,12.9
Avg: 12.43

Conclusion -- 

For nvme/ssd, xfs with rmapbt on is the way to go for sequential operations on large files. This is also better than ext4 even for small file operations (benchmark published later).
For HDD storage, xfs without rmapbt (i.e. rmapbt=0) will perform the best.

Wednesday, December 11, 2024

Getting HP K209a-z to work on Gentoo (hpaio SANE backend)

This printer requires the hpaio SANE backend, which will be made available if you install net-print/hplip[-minimal] along with sane-backends. You can have no backends enabled in SANE_BACKENDS and it'll still work.
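
To confirm the backend is picked up (scanimage comes with sane-backends) --

scanimage -L

The device should be listed via the hpaio backend.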

Tuesday, November 5, 2024

k8sCertsFlatify: Script to dump TLS certificates in Kubernetes to flat files.

A script which extracts 1 or multiple tls/ssl certificates from a kubernetes cluster to PWD.
NOTE: Will not check the TLS certificate of the connecting kubernetes cluster as of the current time.
Switch --kubeconfig/-c <kubernetes kubeconfig file> -- If not present, defaults to ~/.kube/config.
Switch --namespaces/-n -- Namespace to dump certificates from. If not present, will dump certificates of all namespaces.
--context/-k -- The context to use in the kubeconfig file.
--dumpdir/-d -- Dump certificates to this directory instead of PWD.
Will dump certificates in PWD in a directory named after the DNS name to which the certificate belongs.

Install the Ruby gem (gem install k8sCertsFlatify) or use the docker image detechno/k8scertsflatify
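
A hypothetical invocation (assuming the gem installs an executable of the same name; all values are placeholders) --

k8sCertsFlatify --kubeconfig ~/.kube/config --context my-context --namespaces my-namespace --dumpdir /tmp/certs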

Monday, October 14, 2024

Canon Pixma E470 on Linux.

This printer works out of the box on Debian 10 (which is deprecated as of now). It was added to CUPS automatically. I'm not sure which packages provided the filters or ppds...

The scanner also works out of the box with sane.

Apache + PHP: too many connections in CLOSE-WAIT state.

 This is happening because your PHP scripts are not exiting. They hang for whatsoever reason (like waiting for some network i/o etc...). The client has requested closure of the connection, but apache will not close the connection (or consider it closed) until the script terminates, and therefore Apache will not close the TCP socket by making the right system calls.

In case you were wondering, max_execution_time does not count the time taken by external commands which are executed, or other external I/O activity like connections to databases etc...
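
To see the stuck sockets (the port is an example; run as root so -p can map sockets to processes) --

ss -tanp state close-wait '( sport = :80 )'

Each entry maps to an apache worker stuck behind a non-exiting PHP script.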

Sunday, September 22, 2024

Clang "error: unknown target triple 'unknown'" on Gentoo.

 If you've set 'LLVM_TARGETS=' remove it and rebuild llvm + clang (in order). I couldn't reproduce the bug however. If you face it, be sure to file a bug citing this blog post, because there are now multiple users facing it.

Friday, August 23, 2024

agetty autologin prompting for password

You've added --autologin to agetty in whatever way (like modifying or creating a new systemd unit). In that vtty, the user DOES log in, but not without a password prompt.

Check for --login-options/-o and try removing it. It interferes with -a, --autologin.

Monday, April 1, 2024

Prevent systemd from killing a child (sub-process) process on a service stop.

Because of cgroups, systemd knows which process belongs to which systemd unit even if it daemonizes (even to systemd/init). Therefore if you stop a systemd service unit (like a display manager), it'll kill all processes which were spawned directly or indirectly by the display manager. To prevent this from happening, you need to play around with cgroups. 

Find the target PID which you want to avoid being killed. Then move it to another cgroup which systemd did not make (this is all about echoing its PID to the new cgroup) --

 echo <pid> >> /sys/fs/cgroup/cgroup.procs

Now systemd has lost track of this process and you can stop the systemd service unit without systemd killing the process.
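
You can confirm systemd lost track of it (the PID is a placeholder) --

cat /proc/<pid>/cgroup

Before the move, this shows the service's cgroup path; after, it shows the cgroup you echoed the PID into.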

Monday, March 4, 2024

Washermod vs contact frame.

I recently got a contact frame and replaced my washer mod with it -- benchmarked it and found no difference.

Monday, December 25, 2023

Secure openwrt WOL with no open ports (firewall/nat etc...)

The objective of this article is to achieve WOL in a setup where Internet access is behind a NAT or has a firewall which allows no open connections. We'll also cover the security aspect using purely iptables (instead of openwrt's built-in firewall) -- this is particularly important since the openwrt installed on the router is outdated and discontinued (so it won't receive any security updates).

To achieve WOL, we'll be using a simple shell script which will periodically download a text file and check its contents; for a certain value within the text file, it'll trigger a WOL for a certain hardware address. Here is the script -- 

#! /bin/ash
# Loop forever; busybox ash doesn't reliably support [[ ]], so use `true`.
while true
do
    if test '<wol string>' = "$(wget -q -O - -U 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Edg/91.0.864.37' --no-check-certificate '<URL of text file>')"
    then
        /usr/bin/etherwake -D -i '<interface>' <hardware address of your system>
    fi
    # Sleep between polls so the URL isn't hammered in a tight loop.
    sleep 30
done

For this you need to install the etherwake package.

<wol string> is the string written in the text file. For this string, the WOL signal will be emitted. Therefore to disable WOL, you need to modify the text file to anything other than this string.

<URL of text file> is a HTTP link. This may point to an s3 object (which is a good candidate) or any online office document (something hosted by google drive). Regardless, you must be able to directly download a text file from the link using wget.

<interface> is the interface via which your to-be-wol system is accessible.

Make a file /usr/bin/wol.sh, write the script there and -- 

chmod 755 /usr/bin/wol.sh

Add /usr/bin/wol.sh to the local startup script (found in luci in the startup page) as -- 

/usr/bin/wol.sh &

 And you're done!

Now for the firewall part. I've disabled the builtin firewall of openwrt because it was not working as expected -- 

service firewall disable

Reboot router.

Add the firewall rules -- 

iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o <router interface> -p icmp -s <router IP> -d <your system IP>,<default gateway IP>,255.255.255.255,<broadcast IP of your subnet> -j ACCEPT
iptables -A INPUT -i <router interface> -p icmp -s <your system IP>,<default gateway IP> -d <router IP>,255.255.255.255,<broadcast IP of your subnet>  -j ACCEPT
iptables -A INPUT -i <router interface> -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport <ssh port of your router> -s <your system IP> -d <router IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -d <your system IP> -s <router IP> --sport <ssh port of your router> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p udp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport 53 -d <DNS server IP> -j ACCEPT
iptables -A INPUT -i <router interface> -p udp -m conntrack --ctstate RELATED,ESTABLISHED --sport 53 -s <DNS server IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p udp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport 123 -d <NTP server IP> -j ACCEPT
iptables -A INPUT -i <router interface> -p udp -m conntrack --ctstate RELATED,ESTABLISHED --sport 123 -s <NTP server IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED -d <list of public IPs> -s <router IP> -m multiport --dports 80,443 -j ACCEPT
iptables -A INPUT -i <router interface> -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -m multiport --sports 80,443 -s <list of public IPs> -d <router IP> -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

<your system IP> is the system using which you're accessing the router over SSH.

This set of rules assumes you access the luci GUI over ssh tunneling, which is recommended.
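
For e.g., the tunnel can be set up like this (the local port 8080 is arbitrary; luci is assumed to listen on the router's port 80) --

ssh -N -L 8080:127.0.0.1:80 -p <ssh port of your router> root@<router IP>

Then browse http://127.0.0.1:8080 to reach luci without opening its port in the firewall.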

You need to change your NTP servers to something fixed -- otherwise, most NTP server DNS names have so many IPs behind them... Good luck finding such a service.

<list of public IPs> is the list of public IPs of the service provider hosting your text file which the WOL script will monitor. Best of luck finding that.

After ensuring you're not cut off ssh access (otherwise reboot and then reattempt to fix the firewall rules) -- 

iptables-save > /etc/custom-iptables

Then add to the local startup script (via the luci GUI) -- 

iptables-restore < /etc/custom-iptables

Test all desired functionality.

Tuesday, November 7, 2023

Washer mod results on an i3.

I noticed that the temps on my i3 (Alder lake) were pretty high for an i3. So I did a washer mod and measured an approx. 12 degree drop in temps. FYI.

Wednesday, July 19, 2023

Using the script command to record all your shell output and commands transparently.

 In your bashrc file (either /etc/bash.bashrc, or /etc/bashrc or /etc/bash/bashrc etc...) add the following line at the very end -- 

if test -z "$script_running"; then export script_running=1; script -a <destination directory>`date +%s`.txt; exit; fi

AFTER creating <destination directory> -- this is the place where all your recordings will be placed.
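
For example, with /var/log/shell-recordings as the destination (a hypothetical path; note the trailing slash, since the timestamp is appended directly to it) --

mkdir -p /var/log/shell-recordings
if test -z "$script_running"; then export script_running=1; script -a /var/log/shell-recordings/`date +%s`.txt; exit; fi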

Monday, June 19, 2023

Improving cooling of the laminar cooler (on steroids, faster fans, mod/hack) by replacing its stock fan.

The great thing about Intel's laminar coolers is that you can take the fan off by removing 4 screws -- 


Here I have it attached on the motherboard after removing the fan.

Now you can attach a much more powerful fan to it by using hot glue on the plastic clips (4 in no.; the things that fix the heat sink to the motherboard). If you wish to attach a smaller fan, you can stick it directly to the copper heat sink.

Hot glue sticks are good enough for the purpose and are easy to take off when the need arises. Here is the result -- 


Here I have a 92mm PWM server fan attached on the heat sink.

This resulted in 5 degree lower temps.

Wednesday, September 21, 2022

Promql queries to get the average/max/min CPU utilization, network rate and memory

CPU utilization in the last 24 hours -- 

Highest --

100 - min_over_time((avg without(cpu)(((node_cpu_seconds_total{mode="idle"} - (node_cpu_seconds_total{mode="idle"} offset 1m))/60*100)))[24h:1m])

Lowest -- 

100 - max_over_time((avg without(cpu)(((node_cpu_seconds_total{mode="idle"} - (node_cpu_seconds_total{mode="idle"} offset 1m))/60*100)))[24h:1m])

 Average -- 

100 - ((avg without(cpu) (max_over_time(node_cpu_seconds_total{mode="idle"}[24h])) - avg without(cpu) (min_over_time(node_cpu_seconds_total{mode="idle"}[24h])))/86400*100)

Network upload/download rate (MBPS) for an interface in the last 24 hours -- 

Average -- 

((max_over_time(node_network_receive_bytes_total{device="team0"}[24h]) - min_over_time(node_network_receive_bytes_total{device="team0"}[24h]))/86400)/1024/1024

((max_over_time(node_network_transmit_bytes_total{device="team0"}[24h]) - min_over_time(node_network_transmit_bytes_total{device="team0"}[24h]))/86400)/1024/1024

Lowest -- 

min_over_time(((delta(node_network_receive_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

min_over_time(((delta(node_network_transmit_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

Highest -- 

max_over_time(((delta(node_network_receive_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

max_over_time(((delta(node_network_transmit_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

Memory utilization (in %) in the last 24 hours -- 

Average -- 

avg_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

minimum -- 

min_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

maximum -- 

max_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

Memory utilization (in GB) in the last 24 hours --

Average -- 

avg_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])

Minimum -- 

min_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])

Maximum -- 

max_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])

Monday, October 4, 2021

Ignoring XXX because its extensions are not built. Try: gem pristine…

 If after trying out whatever tips and tricks others have suggested this issue still doesn't resolve, this may be a permission issue; that's why things might work when running as root.

And no -- it's not about too few permissions; it may be related to files needing MORE permissions -- for certain files, only the user's executable permission bit might have been set, with the group and others bits missing. To fix this -- 

find <gem paths> -type f -perm -u=x -exec chmod g+x,o+x {} +

find <gem paths> -type f -perm -u=rx -exec chmod g+rx,o+rx {} +

Of course if you're planning to use the gems system wide, all files and directories must be readable -- 

find <gem path> -type f -exec chmod o+r,g+r {} +; find <gem path> -type d -exec chmod o+rx,g+rx {} +


Sunday, June 6, 2021

Backporting gtk-gnutella on Debian buster.

 It seems Debian 10 does not have this package in its repository, but Debian unstable does. So we'll try building a deb for Debian buster -- 

aptitude install libdbus-1-dev libglib2.0-dev libgnutls28-dev=3.6.7-4+deb10u6 libgtk2.0-dev libxml2-dev zlib1g-dev fakeroot

apt-get source --compile gtk-gnutella

This'll result in the deb being generated. Install it -- 

dpkg -i gtk-gnutella_1.1.15-1_amd64.deb

Alternatively, you may download the deb directly -- 

https://drive.google.com/file/d/1YAMfQpgwWGWotwG7NZtRO-WNZMBHobCF/view?usp=sharing

Cleanup -- 

aptitude markauto libdbus-1-dev libglib2.0-dev libgnutls28-dev libgtk2.0-dev libxml2-dev zlib1g-dev fakeroot
apt-get autoremove

Debian buster -- Working VAAPI (hardware video decoding) for newer intel hardware (like ice lake/gen 11 intel GPU (UHD)).

In case you cannot get hardware video acceleration to work on your new Intel processor, apart from trying to install the backported kernel, you may also need a newer intel-media-va-driver (as of the current time 21.1.1 is the latest from testing).

In this article, it'll be shown how to backport these yourself from testing (since no backports are available). Alternatively, you can find prebuilt backports here -- 

 https://drive.google.com/file/d/10rcxvetlJbe4wMUijficd-263S_QYhIj/view?usp=sharing

Extract and install all the debs (dpkg -i *.deb)

To test -- 

LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri/ LIBVA_DRIVER_NAME=iHD vainfo

In case you want to build this yourself, take the following instructions -- 

Add the following to /etc/apt/sources.list -- 

deb http://mirror.csclub.uwaterloo.ca/debian-multimedia/ stable main
deb-src http://mirror.csclub.uwaterloo.ca/debian-multimedia/ stable main
#bullseye
deb http://mirror.csclub.uwaterloo.ca/debian/ bullseye main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ bullseye main contrib non-free
deb http://security.debian.org/debian-security bullseye/updates main contrib non-free
deb-src http://security.debian.org/debian-security bullseye/updates main contrib non-free
deb http://mirror.csclub.uwaterloo.ca/debian/ bullseye-updates main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ bullseye-updates main contrib non-free

#sid
deb http://mirror.csclub.uwaterloo.ca/debian/ sid main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ sid main contrib non-free

Next install packages --

aptitude install debhelper=13.3.3~bpo10+1 dwz=0.13-5~bpo10+1 libdrm-dev libgl1-mesa-dev libwayland-dev libx11-dev libxext-dev libxfixes-dev pkg-config build-essential libset-scalar-perl

Generate debs to be installed -- 

apt-get source --compile libva=2.10.0-1

Install all the resulting debs -- 

dpkg -i libva-dev_2.10.0-1_amd64.deb libva-drm2_2.10.0-1_amd64.deb libva-glx2_2.10.0-1_amd64.deb libva-wayland2_2.10.0-1_amd64.deb libva-x11-2_2.10.0-1_amd64.deb libva2_2.10.0-1_amd64.deb

Install build-depends of intel-media-driver -- 

aptitude install debhelper=13.3.3~bpo10+1 dh-sequence-libva cmake libigdgmm-dev=20.4.1+ds1-1 libx11-dev pkg-config

Generate the debs -- 

apt-get source --compile intel-media-driver=21.1.1+dfsg1-1

And install the generated debs.

Cleanup -- 

aptitude markauto debhelper dwz libdrm-dev libgl1-mesa-dev libwayland-dev libx11-dev libxext-dev libxfixes-dev pkg-config build-essential libset-scalar-perl libva-dev libva-drm2 libva-glx2 libva-wayland2 libva-x11-2 libva2 dh-sequence-libva cmake libigdgmm-dev libx11-dev pkg-config

apt-get autoremove

Tuesday, May 25, 2021

Error: Server asked us to run CSD hostscan.

Anyconnect has provisions for a 'CSD script'... basically a remote program which'll be downloaded from the VPN server and executed on the host machine to gather information about it, which is then sent to the server.

If a VPN server mandates running such a scan, the following error will come up --

"Error: Server asked us to run CSD hostscan."

For openconnect, you've to download external CSD scripts. There are 2 CSD scripts -- they communicate with the VPN server either via POST or by some other means.

https://gist.githubusercontent.com/l0ki000/56845c00fd2a0e76d688/raw/61fc41ac8aec53ae0f9f0dfbfa858c1740307de4/csd-wrapper.sh

The above script sends the collected info via non-POST means. The other, official openconnect CSD script sends it via POST; it's called csd-post.sh. If you've used the wrong script, the following error will occur --

"Refreshing +CSCOE+/sdesktop/wait.html after 1 second"

Repeatedly.

In the above csd-wrapper.sh script, you've to edit it and fill in your VPN host's DNS name in an environment variable.

Switches to openconnect --

--csd-wrapper <path to CSD wrapper script>

--csd-user <user name> -- Run the CSD script as this user.
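
A hypothetical invocation combining the two (the script path and host are placeholders) --

openconnect --csd-wrapper /usr/local/bin/csd-post.sh --csd-user nobody vpn.example.com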


Wednesday, March 17, 2021

Restricting access based on IP on NFS v4 with fsid=0

There's a scenario where you want to restrict people from mounting things under a directory, e.g. /home/test/, based on their IP address; but as you know, the /etc/exports entry for /home/test/ which has fsid=0 must allow an IP set which is a superset of all other host entries in /etc/exports (and under /home/test); otherwise access will be denied for the other entries. Here you can use nocrossmnt. With nocrossmnt on the /etc/exports entry, if you've done a mount --bind onto a directory X inside /home/test, the NFS server will not allow the client to descend into X unless you've another entry for X in /etc/exports and it explicitly allows the client's IP to mount it.
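
A minimal /etc/exports sketch of the above (paths and addresses are hypothetical) --

/home/test    192.168.1.0/24(ro,fsid=0,nocrossmnt,sync)
/home/test/X  192.168.1.10(rw,sync)

Every client in 192.168.1.0/24 may mount the root, but only 192.168.1.10 may descend into the bind-mounted X.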


Thursday, February 4, 2021

Running older systems (which need cgroupv1) on systems running over cgroupv2 (systemd.unified_cgroup_hierarchy)

Run the command -- mount | grep cgroup -- on your host system, and if you see all the mount entries as cgroup2 fs (instead of cgroup), then you won't be able to run older OSs as containers on this host. If you try to force a cgroupv1 guest over cgroup2, the following errors will occur -- 

Cannot determine cgroup we are running in: No such file or directory

Failed to allocate manager object: No such file or directory

An e.g. of what happens with centos 7 on lxc.

For older systems which don't support cgroupv2, you'll need cgroupv1 mounted in /sys/fs/cgroup/systemd on the host. There doesn't seem to be a way to do this using lxc.mount.auto = ; so you've to use scripts (lxc.hook.mount). For this script to mount a cgroup (named X) in the guest, a cgroup named X must also be mounted on the host; this same cgroup will be made available to the guest. Alternatively, you may mount --bind in this script from the host's cgroupv1 mounted directory to the guest's directory; this is a better approach since it allows you to create cgroups inside X exclusively for the container, so the guest may not play around with other processes' cgroups.
As an e.g. --
#! /bin/bash
mount -t tmpfs -o size=1M tmpfs $LXC_ROOTFS_MOUNT/sys/fs/cgroup/
mkdir -p $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd
#mount -t cgroup -o none,name=cgroupv1 cgroupv1 $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd &>> /tmp/script_out.log
mount --bind /tmp/cgroup1/lxc_containers $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd
exit 0

Can't get cgroupv1 mounted on your host? Getting "already mounted or mount point busy."? In this case, ensure the cgroup that you're mounting is not being attached to any subsystems/controllers, which is the default behavior. This is the right approach -- 

mount -t cgroup -o none,name=lxc_compat systemd /tmp/cgroup1

Thursday, November 19, 2020

Asus P1440FA-3410Z linux compatibility.

 This laptop in reality comes with Linux pre-installed (mine did), so it is 100% linux compatible, including the wifi.

Friday, November 6, 2020

Moto 3G (2015) (osprey) -- no audio from speaker or wifi.

I think this is a hardware issue.

To try and resolve the issue, make a call on the mobile network and turn the speaker on. The issue should resolve.

Friday, October 16, 2020

[spreadsheet][ods]Unsprung/rotating mass (wheel/sprocket/tyre) power loss calculator for cars bikes and motorcycles

 In case you're wondering how much power you'll gain when you replace your wheels or sprockets or tyres with lighter ones, this spreadsheet is for you.

https://drive.google.com/file/d/1bM1nyAbg6gJ8RFpCKRujqXe6EF4voAlF/view?usp=sharing

Open in either libreoffice or google docs. 

Realize that the power loss is not only dependent on unsprung mass, but also on other factors such as wind resistance (your vehicle's aerodynamics), mechanical losses etc... unsprung mass is only one of the losses. These other losses change with the speed you're at, so while calculating, apart from dimensions, you've to also enter the speed and the time required to reach that speed in order to determine the power lost because of the wheel/sprocket/tyre. Another reason why you need to enter the speed and the time it takes to reach that speed is that power is a function of energy: if your vehicle takes less time to reach a certain speed, the mass will take less time to attain that RPM, but will ultimately end up with the same energy. Thus, the same energy attained in less time means more power taken up by the rotating mass while accelerating.
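
To illustrate the energy/power relation with a thin-ring approximation (an assumption -- it treats all the saved mass as sitting at the rim; the spreadsheet's model may differ) --

# E = 1/2*I*w^2 with I = m*r^2 and w = v/r, so E = 1/2*m*v^2; P = E/t.
# For 1 kg saved, reaching 100 km/h (27.8 m/s) in 10 s:
echo 'scale=1; 0.5 * 1 * 27.8 * 27.8 / 10' | bc
# => ~38.6 watts freed during that acceleration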

Only fill the required values in column B against the non-colored cells. The colored cells are calculated values.

Thursday, October 8, 2020

D-Link DWM-222 4G on Linux.

Will work on any new Linux distribution out of the box. No need to install the 'drivers'.

In case yours is an old Linux distribution, just eject the detected corresponding cdrom device (/dev/sr0 or /dev/sr1, sr2 etc...) and a modem will be spawned, which can be used just like a standard modem using your networkmanager or using wvdial.

In networkmanager or wvdial, just do not set the APN (or the INIT3 string); the device will pick it up automatically. Older versions of networkmanager do not allow this, so you may face issues with them. In this case, use wvdial with a high BAUD rate.

Thursday, September 10, 2020

Mystery high fever (ranging from 99 to 103) comes and goes with extreme chills (sometimes)

One of my relatives (old) had this kind of mysterious fever. It used to go away in 3 days, and then used to come back within around 5 days. The first day, the fever was high (like 103), then it used to reduce over the next 2 days. The fever was high predominantly at night.

'Modern' medicine and 'specialists' got stuck with lung infection and various tests which gave no results. The blood test results were erratic and inconsistent, pointing to a mix of all diseases. This had been going on for 6 months.

Then he thought of taking a remedy of alternative medicine of Indian origin (something related to Yoga). The practitioner said this is a result of food allergy. Apart from giving medications, he gave a blacklist and whitelist of foods to avoid and prefer.

And that was it ... fever was gone.

Tuesday, September 1, 2020

Matching encoded URLs using regexp/regular expressions (optionally in fail2ban).

Your regular expressions can fail against attackers who encode their URLs; fail2ban will not detect those, and neither will your regular expression. But you can modify your regexps to also match these encoded URLs, even in mixed form (partly encoded, and partly not); create regular expressions that replace each character with something like -- 

(c|%63|%43)

Here I replace c with the above; this will match c, in both its capital and small forms, in encoded URLs. In fail2ban you need to replace the % with a %% -- 

(c|%%63|%%43)

So I write .php as -- 

(\.|%%2E)(p|%%70|%%50)(h|%%68|%%48)(p|%%70|%%50)

You may begin the regular expression with (?i) in fail2ban, or define it as (?i:<your regexp>) elsewhere, to ignore the case of the character (so C and c are alike, and %2e and %2E are also alike).

To convert URLs to their encoded form, I've created a simple script -- 

#! /usr/bin/ruby
# Converts the input string to a regular expression which will match the string either in the URL encoded form or mixed or unencoded form and case insensitively
# First argument is the string.
input = ARGV[0].dup
input.gsub!(/a/,'(a|%61|%41)')
input.gsub!(/b/,'(b|%62|%42)')
input.gsub!(/c/,'(c|%63|%43)')
input.gsub!(/d/,'(d|%64|%44)')
input.gsub!(/e/,'(e|%65|%45)')
input.gsub!(/f/,'(f|%66|%46)')
input.gsub!(/g/,'(g|%67|%47)')
input.gsub!(/h/,'(h|%68|%48)')
input.gsub!(/i/,'(i|%69|%49)')
input.gsub!(/j/,'(j|%6A|%4A)')
input.gsub!(/k/,'(k|%6B|%4B)')
input.gsub!(/l/,'(l|%6C|%4C)')
input.gsub!(/m/,'(m|%6D|%4D)')
input.gsub!(/n/,'(n|%6E|%4E)')
input.gsub!(/o/,'(o|%6F|%4F)')
input.gsub!(/p/,'(p|%70|%50)')
input.gsub!(/q/,'(q|%71|%51)')
input.gsub!(/r/,'(r|%72|%52)')
input.gsub!(/s/,'(s|%73|%53)')
input.gsub!(/t/,'(t|%74|%54)')
input.gsub!(/u/,'(u|%75|%55)')
input.gsub!(/v/,'(v|%76|%56)')
input.gsub!(/w/,'(w|%77|%57)')
input.gsub!(/x/,'(x|%78|%58)')
input.gsub!(/y/,'(y|%79|%59)')
input.gsub!(/z/,'(z|%7A|%5A)')
input.gsub!(/\./,'(\.|%2E)')
input.gsub!(/-/,'(-|%2D)')
puts input

The first argument to this script will be your text input.
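
For example (assuming the script is saved as url_regexp.rb) --

$ ruby url_regexp.rb .php
(\.|%2E)(p|%70|%50)(h|%68|%48)(p|%70|%50)

Double each % (to %%) before using the output in fail2ban, as noted above.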