Monday, March 4, 2024

Washermod vs contact frame.

I recently got a contact frame and replaced my washer mod with it -- benchmarked both and found no difference.

Monday, December 25, 2023

Secure openwrt WOL with no open ports (firewall/nat etc...)

The objective of this article is to achieve WOL in a setup where Internet access is behind a NAT or a firewall which allows no inbound connections. We'll also cover the security aspect using pure iptables (instead of OpenWrt's built-in firewall) -- this is particularly important since the OpenWrt installed on the router is outdated and discontinued (so it won't receive any security updates).

To achieve WOL, we'll use a simple shell script which periodically downloads a text file and checks its contents; if the file contains a certain value, it triggers WOL for a certain hardware address. Here is the script -- 

#! /bin/ash
# Poll the URL every 30 seconds; send a WOL packet whenever the file contains the trigger string.
while true
do
    if test '<wol string>' = "$(wget -q -O - -U 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Edg/91.0.864.37' --no-check-certificate '<URL of text file>')"
    then
        /usr/bin/etherwake -D -i '<interface>' <hardware address of your system>
    fi
    sleep 30
done

For this you need to install the etherwake package.
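On OpenWrt that's (assuming your old release still has a reachable package feed) -- 

opkg update
opkg install etherwake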

<wol string> is the string written in the text file. When the downloaded file contains this string, the WOL packet is emitted. Therefore, to disable WOL, modify the text file to contain anything other than this string.

<URL of text file> is an HTTP(S) link. This may point to an S3 object (a good candidate) or any online office document (something hosted on Google Drive). Regardless, you must be able to directly download a text file from the link using wget.

<interface> is the interface via which your to-be-woken system is accessible.

Make a file /usr/bin/wol.sh, write the script there and -- 

chmod 755 /usr/bin/wol.sh

Add /usr/bin/wol.sh to the local startup script (found in luci in the startup page) as -- 

/usr/bin/wol.sh &

 And you're done!

Now for the firewall part. I've disabled the built-in firewall of OpenWrt because it was not working as expected -- 

service firewall disable

Reboot router.

Add the firewall rules -- 

iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o <router interface> -p icmp -s <router IP> -d <your system IP>,<default gateway IP>,255.255.255.255,<broadcast IP of your subnet> -j ACCEPT
iptables -A INPUT -i <router interface> -p icmp -s <your system IP>,<default gateway IP> -d <router IP>,255.255.255.255,<broadcast IP of your subnet>  -j ACCEPT
iptables -A INPUT -i <router interface> -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport <ssh port of your router> -s <your system IP> -d <router IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -d <your system IP> -s <router IP> --sport <ssh port of your router> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p udp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport 53 -d <DNS server IP> -j ACCEPT
iptables -A INPUT -i <router interface> -p udp -m conntrack --ctstate RELATED,ESTABLISHED --sport 53 -s <DNS server IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p udp -m conntrack --ctstate NEW,RELATED,ESTABLISHED --dport 123 -d <NTP server IP> -j ACCEPT
iptables -A INPUT -i <router interface> -p udp -m conntrack --ctstate RELATED,ESTABLISHED --sport 123 -s <NTP server IP> -j ACCEPT
iptables -A OUTPUT -o <router interface> -p tcp -m conntrack --ctstate NEW,RELATED,ESTABLISHED -d <list of public IPs> -s <router IP> -m multiport --dports 80,443 -j ACCEPT
iptables -A INPUT -i <router interface> -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -m multiport --sports 80,443 -s <list of public IPs> -d <router IP> -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

<your system IP> is the IP of the system from which you're accessing the router over SSH.

This rule set assumes you access the LuCI GUI over SSH tunneling, which is recommended.
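A minimal sketch of such a tunnel (the local port 8080 is an assumption) -- 

ssh -L 8080:127.0.0.1:80 root@<router IP>

Then browse to http://127.0.0.1:8080 for LuCI.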

You need to point the router at fixed NTP server IPs -- most NTP server hostnames have so many rotating IPs behind them that static rules become impractical... Good luck finding such a service.

<list of public IPs> is the list of public IPs of the service hosting the text file which the WOL script will monitor. Best of luck finding that.

After ensuring you haven't cut off your own SSH access (if you have, reboot the router and reattempt fixing the firewall rules) -- 

iptables-save > /etc/custom-iptables

Then add to the local startup script (via the luci GUI) -- 

iptables-restore < /etc/custom-iptables

Test all desired functionality.

Tuesday, November 7, 2023

Washer mod results on an i3.

I noticed that the temps on my i3 (Alder Lake) were pretty high for an i3. So I did a washer mod and measured an approximately 12 degree drop in temps. FYI.

Wednesday, July 19, 2023

Using the script command to record all your shell output and commands transparently.

In your bashrc file (either /etc/bash.bashrc, /etc/bashrc or /etc/bash/bashrc etc...) add the following line at the very end -- 

if test -z "$script_running"; then export script_running=1; script -a <destination directory>`date +%s`.txt; exit; fi

Do this AFTER creating <destination directory> -- this is where all your recordings will be placed.

Monday, June 19, 2023

Improving cooling of the Intel laminar cooler (on steroids, faster fans, mod/hack) by replacing its stock fan.

The great thing about Intel's laminar coolers is that you can take the fan off by removing 4 screws -- 


Here I have it attached on the motherboard after removing the fan.

Now you can attach a much more powerful fan to it by using hot glue on the plastic clips (4 in number; the things that fix the heat sink to the motherboard). If you wish to attach a smaller fan, you can stick it directly to the copper heat sink.

Hot glue sticks are good enough for the purpose and are easy to take off when the need arises. Here is the result -- 


Here I have a 92mm PWM server fan attached on the heat sink.

This resulted in temps about 5 degrees lower.

Wednesday, September 21, 2022

Promql query to get the average/max/min CPU utilization, network rate and memory

CPU utilization in the last 24 hours -- 

Highest -- 

100 - min_over_time((avg without(cpu)(((node_cpu_seconds_total{mode="idle"} - (node_cpu_seconds_total{mode="idle"} offset 1m))/60*100)))[24h:1m])

Lowest -- 

100 - max_over_time((avg without(cpu)(((node_cpu_seconds_total{mode="idle"} - (node_cpu_seconds_total{mode="idle"} offset 1m))/60*100)))[24h:1m])

 Average -- 

100 - ((avg without(cpu) (max_over_time(node_cpu_seconds_total{mode="idle"}[24h])) - avg without(cpu) (min_over_time(node_cpu_seconds_total{mode="idle"}[24h])))/86400*100)

Network upload/download rate (MBPS) for an interface in the last 24 hours -- 

Average -- 

((max_over_time(node_network_receive_bytes_total{device="team0"}[24h]) - min_over_time(node_network_receive_bytes_total{device="team0"}[24h]))/86400)/1024/1024

((max_over_time(node_network_transmit_bytes_total{device="team0"}[24h]) - min_over_time(node_network_transmit_bytes_total{device="team0"}[24h]))/86400)/1024/1024

Lowest -- 

min_over_time(((delta(node_network_receive_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

min_over_time(((delta(node_network_transmit_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

Highest -- 

max_over_time(((delta(node_network_receive_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

max_over_time(((delta(node_network_transmit_bytes_total{device="team0"}[1m])/60))[24h:1m])/1024/1024

Memory utilization (in %) in the last 24 hours -- 

Average -- 

avg_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

minimum -- 

min_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

maximum -- 

max_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])

Memory utilization (in GB) in the last 24 hours --

Average -- 

avg_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])

Minimum -- 

min_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])

Maximum -- 

max_over_time(((node_memory_MemTotal_bytes/1024/1024/1024)-(node_memory_MemAvailable_bytes/1024/1024/1024))[24h:1m])
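To run any of these outside the web UI, you can hit Prometheus' HTTP API directly with curl; a sketch, assuming Prometheus listens on localhost:9090 -- 

curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=avg_over_time((((node_memory_MemTotal_bytes)-(node_memory_MemAvailable_bytes))/node_memory_MemTotal_bytes*100)[24h:1m])'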

Monday, October 4, 2021

Ignoring XXX because its extensions are not built. Try: gem pristine…

After trying out whatever tips and tricks others have suggested, if this issue still doesn't resolve, it may be a permission issue; that's why things might work when running as root.

And no -- it's not about needing fewer permissions, it may be about needing MORE -- for certain files, the group or others executable permission bits might not have been set (while the owner's are). To fix this -- 

find <gem paths> -type f -perm -u=x -exec chmod g+x,o+x {} +

find <gem paths> -type f -perm -u=rx -exec chmod g+rx,o+rx {} +

Of course, if you're planning to use the gems system-wide, all files and directories must be readable -- 

find <gem path> -type f -exec chmod o+r,g+r {} +; find <gem path> -type d -exec chmod o+rx,g+rx {} +


Sunday, June 6, 2021

Backporting gtk-gnutella on Debian buster.

It seems Debian 10 does not have this package in the repository, but Debian unstable does. So we'll try building a deb for Debian buster -- 

aptitude install libdbus-1-dev libglib2.0-dev libgnutls28-dev=3.6.7-4+deb10u6 libgtk2.0-dev libxml2-dev zlib1g-dev fakeroot
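Note that apt-get source only works if you've deb-src entries for a suite carrying gtk-gnutella (unstable here); a sketch for /etc/apt/sources.list, the mirror URL being an assumption -- 

deb-src http://deb.debian.org/debian sid main

Run apt-get update after adding it, then -- 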

apt-get source --compile gtk-gnutella

This'll result in the deb being generated. Install it -- 

dpkg -i gtk-gnutella_1.1.15-1_amd64.deb

Alternatively, you may download the deb directly -- 

https://drive.google.com/file/d/1YAMfQpgwWGWotwG7NZtRO-WNZMBHobCF/view?usp=sharing

Cleanup -- 

aptitude markauto libdbus-1-dev libglib2.0-dev libgnutls28-dev libgtk2.0-dev libxml2-dev zlib1g-dev fakeroot
apt-get autoremove

Debian buster -- Working VAAPI (hardware video decoding) for newer intel hardware (like ice lake/gen 11 intel GPU (UHD)).

In case you cannot get hardware video acceleration to work on your new Intel processor, apart from trying to install the backported kernel, you may also need a newer intel-media-va-driver (as of the current time 21.1.1 is the latest from testing).

In this article, I'll show how to backport these yourself from testing (since no official backports are available). Alternatively, you can find prebuilt backports here -- 

 https://drive.google.com/file/d/10rcxvetlJbe4wMUijficd-263S_QYhIj/view?usp=sharing

Extract and install all the debs (dpkg -i *.deb)

To test -- 

LIBVA_DRIVERS_PATH=/usr/lib/x86_64-linux-gnu/dri/ LIBVA_DRIVER_NAME=iHD vainfo

In case you want to build these yourself, follow the instructions below -- 

Add the following to /etc/apt/sources.list -- 

deb http://mirror.csclub.uwaterloo.ca/debian-multimedia/ stable main
deb-src http://mirror.csclub.uwaterloo.ca/debian-multimedia/ stable main
#bullseye
deb http://mirror.csclub.uwaterloo.ca/debian/ bullseye main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ bullseye main contrib non-free
deb http://security.debian.org/debian-security bullseye/updates main contrib non-free
deb-src http://security.debian.org/debian-security bullseye/updates main contrib non-free
deb http://mirror.csclub.uwaterloo.ca/debian/ bullseye-updates main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ bullseye-updates main contrib non-free

#sid
deb http://mirror.csclub.uwaterloo.ca/debian/ sid main contrib non-free
deb-src http://mirror.csclub.uwaterloo.ca/debian/ sid main contrib non-free

Next install packages --

aptitude install debhelper=13.3.3~bpo10+1 dwz=0.13-5~bpo10+1 libdrm-dev libgl1-mesa-dev libwayland-dev libx11-dev libxext-dev libxfixes-dev pkg-config build-essential libset-scalar-perl

Generate debs to be installed -- 

apt-get source --compile libva=2.10.0-1

Install all the resulting debs -- 

dpkg -i libva-dev_2.10.0-1_amd64.deb libva-drm2_2.10.0-1_amd64.deb libva-glx2_2.10.0-1_amd64.deb libva-wayland2_2.10.0-1_amd64.deb libva-x11-2_2.10.0-1_amd64.deb libva2_2.10.0-1_amd64.deb

Install build-depends of intel-media-driver -- 

aptitude install debhelper=13.3.3~bpo10+1 dh-sequence-libva cmake libigdgmm-dev=20.4.1+ds1-1 libx11-dev pkg-config

Generate the debs -- 

apt-get source --compile intel-media-driver=21.1.1+dfsg1-1

And install the generated debs.

Cleanup -- 

aptitude markauto debhelper dwz libdrm-dev libgl1-mesa-dev libwayland-dev libx11-dev libxext-dev libxfixes-dev pkg-config build-essential libset-scalar-perl libva-dev libva-drm2 libva-glx2 libva-wayland2 libva-x11-2 libva2 dh-sequence-libva cmake libigdgmm-dev libx11-dev pkg-config

apt-get autoremove

Tuesday, May 25, 2021

Error: Server asked us to run CSD hostscan.

AnyConnect has provisions for a 'CSD script' -- basically a remote program which is downloaded from the VPN server and executed on the host machine to gather information about it, which is then sent to the server.

If a VPN server mandates running such a scan, the following error will come up -- 

"Error: Server asked us to run CSD hostscan."

For openconnect, you have to download external CSD scripts. There are 2 CSD scripts -- they communicate with the VPN server either via POST or by other means.

https://gist.githubusercontent.com/l0ki000/56845c00fd2a0e76d688/raw/61fc41ac8aec53ae0f9f0dfbfa858c1740307de4/csd-wrapper.sh

The above script sends the collected info via non-POST means. The other official openconnect CSD script sends it via POST; it's called csd-post.sh. If you've used the wrong script, the following error will occur -- 

"Refreshing +CSCOE+/sdesktop/wait.html after 1 second"

Repeatedly.

For the above csd-wrapper.sh script, you have to edit it and fill in your VPN host's DNS name in an environment variable.

Switches to openconnect -- 

--csd-wrapper <path to CSD wrapper script>

--csd-user <user name> -- run the CSD script as this user.
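Putting it together (the host name and user are placeholders) -- 

openconnect --csd-wrapper ./csd-post.sh --csd-user nobody vpn.example.com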


Wednesday, March 17, 2021

Restricting access based on IP on NFS v4 with fsid=0

There's a scenario where you want to restrict people from mounting things under a directory, e.g. /home/test/, based on their IP address; but as you know, the /etc/exports entry for /home/test/ which has fsid=0 must allow an IP set which is a superset of all the other host entries in /etc/exports (under /home/test); otherwise access will be denied for those other entries. Here you can use nocrossmnt. With nocrossmnt on the /etc/exports entry, if you've a mount --bind inside a directory X inside /home/test, the NFS server will not allow the client to descend into X unless you've another entry for X in /etc/exports which explicitly allows the client's IP to mount it.
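A minimal /etc/exports sketch of the idea (paths and IPs are examples) -- 

/home/test    192.168.1.0/24(ro,fsid=0,nocrossmnt)
/home/test/X  192.168.1.10(rw)

With this, the whole subnet can mount the root export, but only 192.168.1.10 can descend into (or mount) X.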


Thursday, February 4, 2021

Running older systems (which need cgroupv1) on hosts running cgroupv2 (systemd.unified_cgroup_hierarchy)

Run the command mount | grep cgroup on your host system, and if you see all the mount entries as the cgroup2 fs (instead of cgroup), then you won't be able to run older OSes as containers on this host. If you try to force such a guest onto cgroupv2, the following errors will occur -- 

Cannot determine cgroup we are running in: No such file or directory

Failed to allocate manager object: No such file or directory

This is, for example, what happens with CentOS 7 on LXC.

For older systems which don't support cgroupv2, you'll need cgroupv1 mounted at /sys/fs/cgroup/systemd on the host. There doesn't seem to be a way to do this using lxc.mount.auto, so you've to use scripts (lxc.hook.mount). For this script to mount a cgroup (named X) in the guest, a cgroup named X must also be mounted on the host; this same cgroup will be made available to the guest. Alternatively, you may mount --bind in this script from the host's cgroupv1 mounted directory to the guest's directory; this is a better approach since it allows you to create cgroups inside X exclusively for the container, so the guest can't play around with other processes' cgroups.
As an example -- 
#! /bin/bash
# lxc.hook.mount script: give the guest a private cgroupv1 'systemd' hierarchy.
mount -t tmpfs -o size=1M tmpfs $LXC_ROOTFS_MOUNT/sys/fs/cgroup/
mkdir -p $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd
#mount -t cgroup -o none,name=cgroupv1 cgroupv1 $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd &>> /tmp/script_out.log
# Bind a container-specific subdirectory of the host's cgroupv1 mount instead:
mount --bind /tmp/cgroup1/lxc_containers $LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd
exit 0

Can't get cgroupv1 mounted on your host? Getting "already mounted or mount point busy."? In this case, ensure the cgroup that you're mounting is not attached to any subsystems/controllers (attaching them is the default behavior). This is the right approach -- 

mount -t cgroup -o none,name=lxc_compat systemd /tmp/cgroup1

Thursday, November 19, 2020

Asus P1440FA-3410Z linux compatibility.

This laptop in reality comes with Linux pre-installed (mine did), so it is 100% Linux compatible, including the wifi.

Friday, November 6, 2020

Moto 3G (2015) (osprey) -- no audio from speaker or wifi.

I think this is a hardware issue.

To try and resolve the issue, make a call on the mobile network and turn the speaker on. The issue should resolve.

Friday, October 16, 2020

[spreadsheet][ods]Unsprung/rotating mass (wheel/sprocket/tyre) power loss calculator for cars, bikes and motorcycles

In case you're wondering how much power you'll gain when you replace your wheels, sprockets or tyres with lighter ones, this spreadsheet is for you.

https://drive.google.com/file/d/1bM1nyAbg6gJ8RFpCKRujqXe6EF4voAlF/view?usp=sharing

Open in either libreoffice or google docs. 

Realize that the power loss is not only dependent on unsprung mass, but also on other factors such as wind resistance (your vehicle's aerodynamics), mechanical losses etc... unsprung mass is only one of the losses. These other losses change with the speed you're at, so while calculating, apart from dimensions, you've to also enter the speed and the time required to reach that speed in order to determine the power lost because of the wheel/sprocket/tyre. Another reason why you need to enter the speed and the time it takes to reach that speed is that power is a function of energy: if your vehicle takes less time to reach a certain speed, the mass will take less time to attain that RPM, but will ultimately end up with the same energy. The same energy attained in less time means more power taken up by the rotating mass while accelerating.
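A rough sketch of the underlying relation, treating the rim/tyre as a hoop of mass m at radius r (an approximation) -- 

E_{rot} = \tfrac{1}{2} I \omega^{2}, \qquad I \approx m r^{2}, \qquad \omega = v/r \;\Rightarrow\; E_{rot} \approx \tfrac{1}{2} m v^{2}, \qquad P \approx E_{rot}/t

So shaving 1 kg off a tyre, at 100 km/h (27.8 m/s) reached in 10 s, frees roughly 0.5 * 1 * 27.8^2 / 10 ~ 39 W during acceleration.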

Only fill the required values in column B against the non-colored cells. The colored cells are calculated values.

Thursday, October 8, 2020

D-Link DWM-222 4G on Linux.

Will work on any new Linux distribution out of the box. No need to install the 'drivers'.

In case yours is an old Linux distribution, just eject the corresponding detected cdrom device (/dev/sr0 or /dev/sr1, sr2 etc...) and a modem will be spawned which can be used just as a standard modem using your NetworkManager or wvdial.
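E.g. (the device node is whatever your system detected) -- 

eject /dev/sr0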

In NetworkManager or wvdial, just do not set the APN (or the INIT3 string); the device will pick it up automatically. Older versions of NetworkManager do not allow leaving it unset, so you may face issues there. In this case use wvdial with a high BAUD rate.

Thursday, September 10, 2020

Mystery high fever (ranging from 99 to 103) which comes and goes with extreme chills (sometimes)

One of my elderly relatives had this kind of mysterious fever. It used to go away in 3 days, and then used to come back within around 5 days. On the first day, the fever was high (like 103), then it used to reduce over the next 2 days. The fever was prominently high at night.

'Modern' medicine and 'specialists' got stuck on lung infection and various tests which gave no results. The blood test results were erratic and inconsistent, pointing to a mix of all diseases. This had been going on for 6 months.

Then he thought of taking a remedy from alternative medicine of Indian origin (something related to Yoga). The practitioner said this was a result of food allergy. Apart from giving medications, he gave a blacklist and a whitelist of foods to avoid and prefer.

And that was it ... fever was gone.

Tuesday, September 1, 2020

Matching encoded URLs using regexp/regular expressions (optionally in fail2ban).

Your regular expression can fail against attackers who encode their URLs; fail2ban will not detect those, and neither will your regular expression. But you can modify your regexps to match these encoded URLs too, even in mixed form (partly encoded, partly not); create regular expressions that replace each character with something like -- 

(c|%63|%43)

Here I replace c with the above; this will match c, in both its capital and small forms, in encoded URLs. In fail2ban you need to replace the % with %% -- 

(c|%%63|%%43)

So I write .php as -- 

(\.|%%2E)(p|%%70|%%50)(h|%%68|%%48)(p|%%70|%%50)

You may begin the regular expression with (?i) in fail2ban, or define it as (?i:<your regexp>) elsewhere, to ignore the case of the characters (so C and c are alike, and %2e and %2E are also alike).

To convert strings to such regular expressions I've created a simple script -- 

#! /usr/bin/ruby
# Converts the input string to a regular expression which will match the string either in the URL encoded form or mixed or unencoded form and case insensitively
# First argument is the string.
input = ARGV[0].dup
input.gsub!(/a/,'(a|%61|%41)')
input.gsub!(/b/,'(b|%62|%42)')
input.gsub!(/c/,'(c|%63|%43)')
input.gsub!(/d/,'(d|%64|%44)')
input.gsub!(/e/,'(e|%65|%45)')
input.gsub!(/f/,'(f|%66|%46)')
input.gsub!(/g/,'(g|%67|%47)')
input.gsub!(/h/,'(h|%68|%48)')
input.gsub!(/i/,'(i|%69|%49)')
input.gsub!(/j/,'(j|%6A|%4A)')
input.gsub!(/k/,'(k|%6B|%4B)')
input.gsub!(/l/,'(l|%6C|%4C)')
input.gsub!(/m/,'(m|%6D|%4D)')
input.gsub!(/n/,'(n|%6E|%4E)')
input.gsub!(/o/,'(o|%6F|%4F)')
input.gsub!(/p/,'(p|%70|%50)')
input.gsub!(/q/,'(q|%71|%51)')
input.gsub!(/r/,'(r|%72|%52)')
input.gsub!(/s/,'(s|%73|%53)')
input.gsub!(/t/,'(t|%74|%54)')
input.gsub!(/u/,'(u|%75|%55)')
input.gsub!(/v/,'(v|%76|%56)')
input.gsub!(/w/,'(w|%77|%57)')
input.gsub!(/x/,'(x|%78|%58)')
input.gsub!(/y/,'(y|%79|%59)')
input.gsub!(/z/,'(z|%7A|%5A)')
input.gsub!(/\./,'(\.|%2E)')
input.gsub!(/-/,'(-|%2D)')
puts input

The first argument to this script will be your text input.
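E.g., assuming you saved it as urlregex.rb -- 

ruby urlregex.rb '.php'
(\.|%2E)(p|%70|%50)(h|%68|%48)(p|%70|%50)

Remember to double the % signs yourself for fail2ban.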

Monday, June 1, 2020

Nikon A900 review and issues/problems/drawbacks.

Everything about the camera is as expected; for the size, it's the best that you can get at night photography (which is still deficient) as of 2019.

Before you buy, these are a few drawbacks --

1) The slowest shutter speed is 8 seconds in reality. No, it's not 25; 25 seconds is given by some 'mode' which is useless actually.
2) Black round bands are sometimes seen on the edges of the pictures. I think this is because of the image stabilizer. The solution is to zoom in and then zoom out, and soon it'll fix itself. This issue calls for a warranty claim! And yes -- warranty has been claimed. The highly abused lens has now been replaced.
3) Autofocus is terrible! Even while shooting videos. And there's no manual focus to make matters worse. As a universal example, just try to shoot the moon so its craters can be seen. This is not possible without fully zooming into the moon.
4) The camera hangs sometimes.
5) The Wi-Fi picture transfer feature is non-standard. It requires a Windows 'driver', so it won't work on Linux/BSD. I use PTP over USB instead.
6) Transferring pictures to phone over wifi is broken. So is remote photography (actually the phone never connects to the 'smart device' over wifi).
7) Battery charging is extremely slow.
8) The battery display has only 2 levels -- full and low (that 50% mark is not medium, it's actually low, and you're hardly going to get any backup beyond that).

On the very plus side, the IS is very good! Audio recording is great too!

Monday, August 12, 2019

Lineage/resurrection remix/Android: Audio stops when earphones are plugged in.

Sometimes, when you plug in your earphones, the audio stops coming (from the earphones), but notification sounds continue to come. You have to restart your device multiple times to resolve the issue.

The corresponding logs in logcat --

08-11 22:41:36.375   315  8872 E qcvirt  :      [vendor/qcom/proprietary/mm-audio-noship/audio-effects/safx/android-adapter/qcvirt/qcvirt.c:477] Assertion fail: status == PPSUCCESS

Solution -- disable the equalizer in audioFX.

Sunday, May 12, 2019

Linux technologies (kernel, bash etc...) support for Windows -- for a better monopoly.

And when you start to think, in modern times, that Microsoft loves Linux and open source, the question arises: is it really the truth? Does Microsoft really love open source?

No; in fact Microsoft is still trying to enforce its monopoly, and support for opensource technologies makes its monopoly stronger.

Reviewing a few pages of history, realize why Microsoft is a monopoly -- 
1) It keeps all protocols hidden
2) All technologies get patented in the US (the Microsoft tax)
3) It tries to hide the formats of files that their programs use, and when they do open up a format, the specs are not complete (to ensure only their programs are able to open their files) and are laden with patent warnings.

None of this has changed; but now Linux programs can run on Windows officially. So to the unsuspecting consumer -- Windows has the power to run both their proprietary, cryptic and hidden Windows programs and open Windows files, along with Linux capabilities; so the obvious question is, why would they switch to Linux? So let the monopoly commence and grow stronger.

Thursday, August 30, 2018

Fixing kernel: "unregister_netdevice: waiting for to become free. Usage count = "

This is a kernel bug which will cause docker to hang, and it's triggered by stopping a container (one which possibly does not stop gracefully, i.e. does not respond to SIGTERM). The only solution is a reboot. It's speculated that this is a network-namespace-related problem, reproducible on all of lxc/docker/rkt etc....

The thing that worked for me to reduce the probability of hitting this bug is removing limits from the docker systemd service. Newer systemd has a default limit even if you didn't set one. Set LimitNOFILE=1048576, LimitNPROC=infinity, LimitCORE=infinity, TasksMax=infinity in the docker systemd unit and this may just fix the issue; this also reduced the (CPU-based) load average.
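A sketch of such a drop-in (the file path follows the usual systemd convention; adjust to taste) -- 

# /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity

Then run systemctl daemon-reload and restart docker.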

Saturday, March 31, 2018

Bash history sanitize/cleaner.

Instead of wiping your bash history, this script will remove problematic history entries, thus sanitizing it.


#! /bin/bash
# Without argument will print what it'll delete. If 1st argument is y, then it'll clean the history of the user.
# The regular expressions catch the good commands which are to be retained.
echo 'Would delete commands -- '
grep -vP --regexp='^[a-zA-Z0-9/.#~>]' ~/.bash_history
grep -vP --regexp='^.{0,1000}$' ~/.bash_history
if test "$1" == 'y'
then
 grep -P --regexp='^[a-zA-Z0-9/.#~>]' ~/.bash_history | grep -P --regexp='^.{0,1000}$' > /tmp/bash_history_cleaned || exit
 mv /tmp/bash_history_cleaned ~/.bash_history
fi

Read the comments for how to get this to work.
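E.g., assuming you saved it as history_sanitizer.sh -- 

./history_sanitizer.sh     # dry run; only prints what would be deleted
./history_sanitizer.sh y   # actually rewrites ~/.bash_history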

The mysterious case of engine oil thinning (AKA oil shearing)

If you're someone who rides at high RPM and has a vehicle capable of going to high RPMs (6000+), your engine oil might be subject to a phenomenon called oil shearing, which thins down your engine oil and lowers its grade. Bad quality engine oil means more shearing.

So it's better to check your engine oil for quality. Now the question is what to check: feel the viscosity of the engine oil on your fingers, and if it does not feel oily (and feels more watery), the engine oil has been subjected to shearing and has thinned down.

In other aspects, the engine oil might seem OK -- it won't smell burnt, will not leave soot when you rub it, and of course will not be excessively thick; but regardless, if it has thinned this much, it's time for a change, and next time switch to fully synthetic engine oil, since engine oil must not thin like this at all.

Saturday, September 30, 2017

Understanding inner workings of crossdev.


The crossdev executable is going to install a toolchain for a foreign architecture on your host machine as regular ebuilds. The root of this foreign architecture (RF) will be placed in /usr/<target>, where <target> is in the same syntax as the -t switch to crossdev.
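E.g. (the target tuple is just an example) -- 

crossdev -t aarch64-unknown-linux-gnu

after which RF lives under /usr/aarch64-unknown-linux-gnu/.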
Now the question is, where do you get the ebuilds of the toolchain for the foreign architecture? crossdev creates a separate overlay (the directory of the overlay must be added to PORTDIR_OVERLAY in make.conf of the host machine) which contains packages within a newly created category (in that overlay) named cross-<target>; the packages (belonging to the toolchain) in this overlay are basically symlinks to certain directories (belonging to the toolchain) on the host system. The packages within this category contain only the essential components of the toolchain and are merged just like any other package. The result of the installation is that the toolchain for the foreign architecture is installed with the prefix <target>-; emerge is also installed in the same way.
As said before, these executables (including emerge) operate assuming RF as the root dir. All configuration in RF will be respected by these commands, which includes make.conf, package.use, package.keywords etc... this includes the overlays, but it appears that the gentoo portage tree of the host is always searched. I don't understand why, or the mechanism.
Executing crossdev with these switches will start building the toolchain using the emerge command itself (unless you've passed some switches which prevent it from doing so).
These packages will populate a few things in /usr/<target>.
After these have been built, you can either build @system and thus create the installation from scratch without a stage3 (however, I don't think this'll work), or, as I would suggest, extract a stage3 into /usr/<target> without overwriting the make.conf in it, since it's a special make.conf which works with the toolchain commands as installed on the host; however this make.conf is underoptimized, so I would say you merge both make.confs, with the crossdev-specific parts commented out when actually running the system, and toggle the crossdev parts back when using crossdev.
crossdev will also make a make.profile, but unfortunately it's symlinked to the wrong profile; overwrite it with the one in the stage3 tarball and change it to your preferred profile.
Whatever changes you do, remember that running crossdev again will overwrite them, so 1) back them up and 2) never run crossdev again. Upgrade these packages using your host's PM.
As for using eix in RF, use EIX_PREFIX=<RF path> for that.

Tuesday, September 19, 2017

systemd-logind -- "Failed to start Login Service." on Debian 8

This issue comes up on LXC when you enable dbus so you can communicate with systemd; I don't know why this triggers.

You need to add the sys_resource capability to fix this.
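A sketch of the change, assuming sys_resource currently appears in the container config's lxc.cap.drop line (keep whatever else is in your list) -- 

# /var/lib/lxc/<container>/config
# before: lxc.cap.drop = sys_resource sys_time sys_module
lxc.cap.drop = sys_time sys_module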

Wednesday, August 9, 2017

Using curb with custom verb/methods, headers and body.

The Easy interface of Curb does not allow arbitrary request methods to be made.

Instead of using Easy, you can use the Curl module. In curb's rubydoc, you'll see hardly any description of these methods, so I explain them here.

There are methods named after REST verbs in the Curl module, and there's an http method which allows you to send custom verbs. These methods also take a code block, in which case a newly created object (just like Easy.new) is passed as the first argument to the block. All instance methods which apply to an object from Easy.new also apply to this object. On exiting the code block, the request will be made.
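A sketch from the above description (the verb, URL and header are examples; check your curb version's rubydoc for the exact signatures) -- 

require 'curb'

# REST-verb method with a block; the object passed in behaves like one from Easy.new
Curl.get('http://example.com/') { |curl|
 curl.headers['X-Custom'] = 'value'
}

# Custom verb via the http method
Curl.http('PURGE', 'http://example.com/cache') { |curl|
 curl.headers['X-Custom'] = 'value'
}
# In both cases the request fires when the block exits.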

Sunday, July 30, 2017

Script to group files/directories to reach the closest desired/target size.

The program takes a list of files and directories as the set of last arguments, and outputs the paths of each of the files/directories which have been selected to reach the target size (which is the first argument). It will ignore files which do not exist.

First argument -- target size in bytes.


Of the selected files, it will print each of them on a new line with their size, separated by a tab on the same line. Specify units as KB, MB, GB, TB in the environment variable UNIT.


script -- 
https://paste.ubuntu.com/25206842/

Before you run it, run 'gem install ClosestSum' on your system as root. This'll install the ClosestSum gem, which is a library implementing the algorithm.

Monday, July 24, 2017

lxc-console prints duplicate characters (crippled/corrupt console).

This happens on Debian. After an upgrade to lxc 2.0.x, lxc-console does not work well. It produces duplicate characters; you cannot log in for the same reason.


For lxc 1.x you must have created getty@tty*.service units somewhere in /etc/systemd/system/. They're no longer needed; remove them to fix the issue.

Thursday, June 15, 2017

-flto-partition=balanced and -flto-partition=1to1 benchmark

I compiled grep and libpcre, once with balanced and once with 1to1 (and of course with -flto=4 in both cases), and the result was that grep was 10% faster with balanced, which makes sense too.

Friday, May 12, 2017

Persistent/resilient ssh sessions for unstable internet connections.

Long back, SSH had introduced an experimental 'roaming' feature where changes to the ssh client's IP resulted in resuming the session on the server regardless of the changed IP. This feature was never implemented on the server, rendering it useless on the client, but it caused a vulnerability.

Instead of using roaming, a much better approach is using screen with shell scripting. This has serious advantages, like resuming the session from a different client machine, and the program running in the foreground won't slow down even if the terminal (or Internet connection) is slow, etc...

Just install screen on the server and run the following command for a persistent session --

while [[ j != k ]]; do ssh -tt <user>@<server> screen -r <session name> -p 0; done

This'll reconnect on disconnection. You can use tabs in screen and take multiple sessions over the same screen instance. Open the other tabs using --

while [[ j != k ]]; do ssh -tt <user>@<server> screen -r <session name> -p 1; done

while [[ j != k ]]; do ssh -tt <user>@<server> screen -r <session name> -p 2; done

while [[ j != k ]]; do ssh -tt <user>@<server> screen -r <session name> -p 3; done

For tabs numbered 1, 2, 3 etc...

I use Gentoo's default config for screen on the server; it works great!

Friday, May 5, 2017

Incremental backup system of your Android app settings and your data.

A phone is an unreliable device which can always be stolen or get bricked, regardless of how expensive the phone is or how reliable the manufacturer claims it to be. This is primarily because of the fact that the storage cannot be detached from the phone, and the storage's content is highly integrated with the phone's hardware.

So I've created a system to regularly back up your app data in an incremental way -- the old data gets retained, and a snapshot of the latest backup is also taken, all using less space. You can restore all this data to a new phone, or revert an older version of the data to your existing phone (maybe to get it unbricked without losing all your settings).

Of course I know about Google's cloud backup, but in my experience it's unreliable, requires a lot of bandwidth and works only on select (Google-only) apps. This works on all apps. I also know about the adb backup and restore feature, but that also does not work with all apps.

This system requires the sshelper app, and it must run in the background all the time. You must configure key-based login as specified in its 'Public-key (passwordless) logins' tutorial. After configuring that, you can disable password-based login and untick the 'keep device awake' checkbox to improve battery and security.

The other thing it requires is root access.

sshelper installs a busybox. You need to use the tar and crond commands from it. The scripts I've deployed use exactly that --

Place this script in /system/bin/custom_data_backup.sh -- 

#! /system/bin/sh
# Backs up data only if the latest backup is more than 12 hours old
SECONDS=$((12*60*60))
SD_CARD="Your sdcard mount point"
mkdir $SD_CARD/custom_backup
latest=`ls -tr $SD_CARD/custom_backup/ | tail -1`
if test \( -z "$latest" \) -o \( `date +%s` -gt $(($latest + $SECONDS)) \)
then
 cd /data/data && /data/data/com.arachnoid.sshelper/bin/tar -cpf $SD_CARD/custom_backup/`date +%s` *
fi

This can be done by the command (as root) --

 vim /system/bin/custom_data_backup.sh

And then pressing 'i' to go to edit mode. Then paste, make changes, then press ESC a few times and type ':x' (without the single quotes).

Modify the SD_CARD variable to point to the mount point where your sdcard is mounted. Use the mount command to see the various mount points; one of these must be your SDcard. cd to that place and verify by looking at its contents that it is indeed the place.

It happens that Android has a bug or problem etc... where the system call which these basic utilities use to sleep for a certain period of time is inaccurate. This system call never returns when the sleep is done for a long period of time. This system works around that problem.

Next you need to setup cron.

Run these commands as root -- 

mkdir /data/data/com.arachnoid.sshelper/spool
vim /data/data/com.arachnoid.sshelper/spool/root

Now press i, then copy paste the following text -- 

*/30 * * * * custom_data_backup.sh

Then press ESC a few times and type ':x' (without the single quotes).

Then run -- 

vim /etc/init.d/99backup.sh

Now press i, then copy paste the following text -- 

#!/system/bin/sh
mount -o remount,rw /
ln -s /system/bin /bin
mount -o remount,ro /
/data/data/com.arachnoid.sshelper/bin/crond -c /data/data/com.arachnoid.sshelper/spool

Then press ESC a few times and type ':x' (without the single quotes).

Then run -- 

chmod 755 /etc/init.d/99backup.sh

Install universal init.d support and enable init.d scripts. If you've a ROM which has inbuilt support for init.d, you will not require this.

After this, you must see backups created in the custom_backup directory (under your sdcard mount point). The file name is the timestamp of the date at which the backup was taken. These files will be created approximately every 12 hours.
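To restore one of these, the reverse should do (a sketch -- same busybox tar; do it with the apps not running) -- 

cd /data/data && /data/data/com.arachnoid.sshelper/bin/tar -xpf $SD_CARD/custom_backup/<timestamp>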

Next, configure your desktop system to take incremental backups. This is the script; it runs as a daemon and keeps running until the backups are complete -- 

https://paste.ubuntu.com/24521288/

Modify the variables ($*) as per your needs and as per your phone.

$sdcard is the directory where backups are placed (that custom_backup directory) on your phone. It must be a full path.

$sshkey is the ssh key which you generated for key-based login to your phone.

$ip is the IP of your phone on the Wi-Fi network. You must configure your phone for a static IP.

$backupDest is the path of your desktop system where the backups will be placed.

$sdcardInternal is the full path of your internal SDcard in your phone. $sdcardExternal is similar but for your external SDcard.

Your SDcards will be incrementally backed up too.

This script is resilient to failure -- so even if your phone is gone, it'll wait until your backup is complete, after you've come back and your phone's presence is detected. Set up a cron job once a day which triggers this script. If you wanna know -- the script is released under the GPL v3 license.

Tuesday, April 18, 2017

CSV to vcf/vcard converter (advanced editing of Android contacts).

Spreadsheets are great! But only if you could use them to manage contacts, especially on your Android phone (or any standards-compliant phone which can export to vcard).

So we've got vcf2csv to do the conversion. This program will blindly convert all fields in the vcard (including standard fields like version) to columns in the generated tab-separated CSV. And that's just what we want.

Now you've got to convert it back to VCF so you can import it to your (standards-compliant) phone. To do so, convert using this script (released under the Apache license :p) --


#!/usr/bin/ruby
require 'csv'
header = nil
counter = 0
CSV.foreach(ARGV[0], { :col_sep => "\t", :quote_char => '!' }) {
 |row|
 if counter == 0
  header = row.dup
 else
  puts "BEGIN:VCARD"
  row.each_with_index {
   |data, index|
   if data != nil
    puts "#{header[index]}:#{data}"
   end
  }
  puts "END:VCARD"
 end
 counter += 1
}

The first argument is the path of the tab-separated CSV to convert. The output of the program is the converted VCARD. It simply converts the columns to vcard fields.
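E.g., assuming you saved it as csv2vcf.rb and your sheet was exported as contacts.csv -- 

ruby csv2vcf.rb contacts.csv > contacts.vcf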

Sunday, April 16, 2017

(semi)Static IPv6 for AWS.

EIP does not support IPv6, but you can still get a (semi)static IPv6 as long as you don't delete your VPC or make changes to your subnet.

On the network interface page of your EC2 instance, there's an option to add more IPv6 addresses (like you can with IPv4 addresses); in fact, there may be a default IPv6 address, depending on whether you opted for one.

You just add an IPv6 address to the interface -- you'll have control over it; you can remove it from one instance and attach it to another (but only in the same subnet).
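The same can be scripted via the AWS CLI; a sketch with placeholder IDs/addresses -- 

aws ec2 assign-ipv6-addresses --network-interface-id eni-0abc123 --ipv6-address-count 1
aws ec2 unassign-ipv6-addresses --network-interface-id eni-0abc123 --ipv6-addresses <the assigned IPv6>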

Saturday, April 15, 2017

Ultimate traffic shaping script (low prioritize/background your P2P/torrent/Bitcoin/gnutella/edonkey/emule traffic).


Sick of P2P traffic (like Bitcoin, torrents) hogging your Internet connection? This is the solution. It gives HTTP(S), SMTP, DNS, IMAP, POP etc... ports high priority over other ports.

It's to be realized that QoS works only at the point where the traffic is throttled. Since you have no control over your ISP's network throttle, you won't be able to get a working QoS unless you throttle your traffic manually on your local system and apply QoS there. The same goes for incoming (ingress) and outgoing (egress) traffic.
Linux has the capability to do so via tc commands (belonging to the iproute2 package).
You need to fill in the variables to effectively get the script to work.
devspeed is the link speed of your network device; inetdev is the interface over which you get your Internet connection; inetUspeed and inetspeed are your upload and download Internet speeds.

The units are in K or M bits per second.

After filling in the variables, copy-paste the commands into your root shell. If the commands result in errors, you can try upgrading to a newer version of iproute2 and upgrading the kernel.

The script works well, but don't expect things like SSH to work like... in real time. You'll see considerable delay with these real time apps.

And yes, ICMP has not been given a high priority.

devspeed=100mbit
inetdev=eth1
inetUspeed=10000kbit
inetspeed=10000kbit
# Ingress side: LAN-sourced traffic (prio 1) is left alone; everything else (prio 10) is redirected to ifb0 for shaping
tc qdisc add dev $inetdev ingress
tc filter add dev $inetdev parent ffff: protocol ip prio 1 u32 match ip src 192.168.0.0/16 flowid 10:1
modprobe ifb numifbs=1
tc filter add dev $inetdev parent ffff: protocol ip prio 10 u32 match u32 0 0 flowid 11:1 action mirred egress redirect dev ifb0
# Shape the redirected (download) traffic on ifb0; class 2:1 is high priority, 2:2 is low
tc qdisc add dev ifb0 root handle 1: cbq avpkt 1400b bandwidth $inetspeed
tc class add dev ifb0 parent 1: classid 1:1 cbq allot 1400b prio 0 bandwidth $inetspeed rate $inetspeed avpkt 1400 bounded isolated
tc filter add dev ifb0 parent 1: protocol ip prio 16 u32 match u32 0 0 flowid 1:1
tc qdisc add dev ifb0 parent 1:1 handle 2: cbq avpkt 1400b bandwidth $inetspeed
tc class add dev ifb0 parent 2: classid 2:1 cbq allot 1400b prio 1 rate $inetspeed avpkt 1400 maxburst 1000 bandwidth $inetspeed
tc class add dev ifb0 parent 2: classid 2:2 cbq allot 1400b prio 8 rate $inetspeed avpkt 1400 maxburst 1 bandwidth $inetspeed
# Downloads coming from these well-known service ports go to the high-priority class
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 443 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 80 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 25 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 143 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 993 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 465 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 8080 0xffff flowid 2:1
tc filter add dev ifb0 parent 2: protocol ip prio 1 u32 match ip sport 53 0xffff flowid 2:1
# Everything else is low priority
tc filter add dev ifb0 parent 2: protocol ip prio 10 u32 match u32 0 0 flowid 2:2
ip link set up dev ifb0
# Egress side: LAN-destined traffic bypasses the throttle (1:1); the rest is bounded to the upload speed (1:2)
tc qdisc add dev $inetdev root handle 1: cbq avpkt 1400b bandwidth $devspeed
tc class add dev $inetdev parent 1: classid 1:1 cbq allot 1400b prio 0 bandwidth $devspeed rate $devspeed avpkt 1400
tc class add dev $inetdev parent 1: classid 1:2 cbq allot 1400b prio 0 bandwidth $inetUspeed rate $inetUspeed avpkt 1400 bounded maxburst 1 bandwidth $inetUspeed
tc filter add dev $inetdev parent 1: protocol ip prio 1 u32 match ip dst 192.168.0.0/16 flowid 1:1
tc filter add dev $inetdev parent 1: protocol ip prio 10 u32 match u32 0 0 flowid 1:2
tc qdisc add dev $inetdev parent 1:2 handle 2: cbq avpkt 1400b bandwidth $inetUspeed
tc class add dev $inetdev parent 2: classid 2:1 cbq allot 1400b prio 1 rate $inetUspeed avpkt 1400 maxburst 1000 bandwidth $inetUspeed
tc class add dev $inetdev parent 2: classid 2:2 cbq allot 1400b prio 8 rate $inetUspeed avpkt 1400 maxburst 1 bandwidth $inetUspeed
# Packets with these source ports get high priority; everything else goes low
tc filter add dev $inetdev parent 2: protocol ip prio 1 u32 match ip sport 443 0xffff flowid 2:1
tc filter add dev $inetdev parent 2: protocol ip prio 1 u32 match ip sport 80 0xffff flowid 2:1
tc filter add dev $inetdev parent 2: protocol ip prio 1 u32 match ip sport 8080 0xffff flowid 2:1
tc filter add dev $inetdev parent 2: protocol ip prio 1 u32 match ip sport 65111 0xffff flowid 2:1
tc filter add dev $inetdev parent 2: protocol ip prio 10 u32 match u32 0 0 flowid 2:2

Saturday, March 25, 2017

Unique/Similar links/URLs grouper/sorter

This is a Ruby library (and an accompanying app) --

https://rubygems.org/gems/LinkGrouper

which groups similar links/URLs (or finds unique links) and writes them to separate files.

Saturday, February 25, 2017

Mawk vs gawk vs ruby benchmark.

The input file contains lines starting either with a number or with something else. When a line starts with a number, it contains exactly 2 space-separated numbers. The output is the sum of the 2 numbers; lines starting with anything other than a number will be ignored. Some sample lines --

720 7
256 1
4 4
5 7
a578dc953fd09cc6
55 3
f2d9d631d497c97e
cb6db932d9c9b6c2

Awk pattern --

'/^[0-9]/ { print $1+$2 }'

Ruby script --

#! /usr/bin/ruby
ARGF.each {
 |line|
 if line =~ /^([0-9]+) ([0-9]+)/
  puts $1.to_i + $2.to_i
 end
}

Results --

time gawk '/^[0-9]/ { print $1+$2 }' /tmp/awk_input.txt > /dev/null

real    0m10.224s
user    0m10.192s
sys     0m0.031s

time mawk '/^[0-9]/ { print $1+$2 }' /tmp/awk_input.txt > /dev/null

real    0m2.804s
user    0m2.769s
sys     0m0.032s

time ./bench.rb /tmp/awk_input.txt > /dev/null

real    0m36.886s
user    0m36.813s
sys     0m0.070s

So overall, mawk is 3.5 times faster than gawk and is 13 times faster than Ruby.

Script used to generate the input file --

#! /usr/bin/ruby
require 'securerandom'
awkinput = IO.new(IO.sysopen("/tmp/awk_input.txt", 'a'))
9999999.times {
 writeme = SecureRandom.hex(8)
 if writeme =~ /^([0-9]+).*([0-9]+)/
  datawrite = "#{$1} #{$2}"
 else
  datawrite = writeme
 end
 awkinput.write(datawrite + "\n")
}

Sunday, February 5, 2017

Block device tester

I made this script to test block devices. First argument is the block device to test.
#! /usr/bin/ruby 
# Will quit in case some corrupt blocks are found and will print which position (from the offset) was a corrupt block found.
# First arg -- the block device.
require "securerandom"
require 'digest'
# Block size -- no. of Bytes to write at a time. Script will consume this much memory.
Bs = 9*1024*1024
Multiplyer = 6
# Returns random data of size bs. multiplyer specifies over how much interval to repeat the random data. The data drawn from the random no. generator will be bs/multiplyer
def getRandom(multiplyer, bs)
 randomDataUnit = (bs.to_f/multiplyer.to_f).ceil
 randomData = SecureRandom.random_bytes(randomDataUnit)
 randomData *= multiplyer
 if randomData.bytesize > bs
  randomData = randomData.byteslice(0, bs)
 end
 return randomData
end

# Open device
devwio = IO.new(IO.sysopen(ARGV[0], File::WRONLY|File::BINARY|File::SYNC))
devrio = IO.new(IO.sysopen(ARGV[0], File::RDONLY|File::BINARY|File::RSYNC))
devrio.sync = true
devwio.sync = true

# Calculate no. of blocks to write
devsize = `blockdev --getsize64 #{ARGV[0]}`.to_i
writeBlocks = (devsize.to_f/Bs.to_f).floor

# Write those blocks while testing
writeBlocks.times {
 data = getRandom(Multiplyer, Bs)
 devwio.write(data)
# TODO -- Move this check to a separate function
 if (Digest::SHA1.digest data) != (Digest::SHA1.digest devrio.read(Bs))
  puts "\nData verification failed from #{devrio.pos-Bs} to #{devrio.pos}"
 else
  100.times {
   print "\x8"
  }
  print "Progress -- #{devrio.pos/1024/1024}MB"
 end
}
# Handle the remaining partial block.
data = getRandom(1, devsize-(writeBlocks*Bs))
devwio.write(data)
# TODO -- Move this check to a separate function
if (Digest::SHA1.digest data) != (Digest::SHA1.digest devrio.read)
  puts "\nData verification failed from #{devrio.pos-Bs} to #{devrio.pos}"
else
 100.times {
  print "\x8"
 }
 print "Last #{devrio.pos/1024/1024}MB"
end
puts
devwio.close
devrio.close
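Usage (the device name is an example; this OVERWRITES the whole device, destroying its contents) -- 

ruby block_tester.rb /dev/sdX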

Sunday, November 20, 2016

ruby vs bash benchmark (loops comparison).

Bash traditionally relies heavily on external commands, which adds a lot of overhead; it's the language of the command line, not a proper programming language; it'll work well when most of the time is spent in external binaries.

time ruby bench.rb > /dev/null; time bash bench.sh > /dev/null

real    0m0.875s
user    0m0.869s
sys     0m0.007s

real    0m10.336s
user    0m10.111s
sys     0m0.223s

There's no comparison: ruby is orders of magnitude faster than bash.

The scripts --

Bash --

#! /bin/bash
declare -i i
i=0
while test $i -le 999999
do
    echo hello world
    i=i+1
done

Ruby --

#! /bin/ruby
i = 0
while (i <= 999999)
    puts "hello world"
    i = i + 1
end

In the bash script there's frequent execution of 2 commands -- test and echo (builtins in bash, but still interpreted on every iteration) -- which makes it slow.

However, even if you do not use external commands, bash seems to be still slow --

time ./bash_for.sh 

real    0m4.361s
user    0m4.312s
sys     0m0.048s

time ./ruby_for.rb 

real    0m0.289s
user    0m0.090s
sys     0m0.035s

The for-loop scripts --
#! /usr/bin/ruby
tst = Array.new
999999.times {
 |k|
 tst[k] = k
}

and
#! /bin/bash
declare -i tst
for i in {0..999999}
do
 tst[$i]=$i
done


Thursday, November 17, 2016

nginx+fail2ban tutorial/document.

fail2ban + Nginx

In this system, fail2ban is supposed to parse (customized) nginx logs for 404 and 403 status codes and add iptables rules to block, at the network layer, the IPs from which excessive 404s and 403s are coming.

Under a DDOS, because of the variety of IPs available, the frequency of banning and unbanning will be large; as a result the iptables command will run too many times, resulting in overhead. A system has been created to prevent this overhead even when there are 1000s of IPs being banned and unbanned.

The objective is to prevent overload of the application and brute force attacks done by sending frequent failed authentication requests. 404s have also been taken care of to prevent path discovery, apart from the same reasons as previously stated.

Architecture

Instead of the banning iptables command being run directly by fail2ban, it's indirectly executed by a bash script on a cron job, which runs a single iptables command to ban/unban any number of IPs in bulk.

fail2ban runs as an unprivileged user, writes to files containing the IPs to be banned/unbanned which the script parses and bans/unbans them in bulk using a single execution of iptables command.

Implementation

Since this is done for testing purposes on a minimal local system (Gentoo) which runs a custom kernel (no iptables FILTER table support), a Debian VM will be created which will contain the actual implementation of the project.

Hits to the VM will be done from the base machine.

Prepare VM --

$ cat /etc/gentoo-release

Gentoo Base System release 2.3

Create rootfs image from template --

qemu-img create -f qcow2 -o cluster_size=512,lazy_refcounts=on,backing_file=Debian8NetworkedSSHRepoPackagesEnhancedUpdate.qcow Debian8NetworkedSSHRepoPackagesEnhancedUpdate_fail2ban.qcow 20G

Load KVM modules (not loaded because of minimum and highly customized OS) --

modprobe kvm_intel

Create tap device veth for the VM to connect to the base machine --

modprobe tun;ip tuntap add mode tap veth

Assign ipv6 and ipv4 addresses on a temporary basis --

ip a add fc00::1:1/112 dev veth;ip link set dev veth up

ip a add 192.168.3.1/24 dev veth

Enable KSM --

echo 1 > /sys/kernel/mm/ksm/run

echo 30000 > /sys/kernel/mm/ksm/sleep_millisecs

Start VM --

qemu-system-x86_64 -machine accel=kvm,kernel_irqchip=on,mem-merge=on -drive file=/home/de/large/VM_images/Debian8NetworkedSSHRepoPackagesEnhancedUpdate_fail2ban.qcow,id=centos,if=ide,media=disk,cache=unsafe,aio=threads,index=0 -vnc :1 -device e1000,id=ethnet,vlan=0 -net tap,ifname=veth,script=no,downscript=no,vlan=0 -m 512 -smp 4 -daemonize -device e1000,id=inet,vlan=1,mac=52:54:0F:12:34:57 -net user,id=internet,net=192.168.2.0/24,vlan=1

Login to the VM --

$ ssh root@fc00::1:2
root@fc00::1:2's password:

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Nov 17 12:03:07 2016 from fc00::1:1
root@LINUXADMIN:~#

Configure ipv4 address for the VM

In /etc/network/interfaces --

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet6 static
address fc00::1:2
netmask 112
#gateway fc00::1
# dns-* options are implemented by the resolvconf package, if installed
#dns-nameservers fc00::1
#dns-search LinuxAdmin

iface eth0 inet static
address 192.168.3.2
netmask 24

auto eth1
iface eth1 inet dhcp

Bring up the changes via console --

ifdown eth0; ifup eth0

Setup nginx --

This setup is just for testing.

aptitude install nginx

The following NEW packages will be installed:
fontconfig-config{a} fonts-dejavu-core{a} geoip-database{a} libfontconfig1{a} libgd3{a} libgeoip1{a} libjbig0{a}
libjpeg62-turbo{a} libtiff5{a} libvpx1{a} libxml2{a} libxpm4{a} libxslt1.1{a} nginx nginx-common{a} nginx-full{a}
sgml-base{a} xml-core{a}
0 packages upgraded, 18 newly installed, 0 to remove and 27 not upgraded.
Need to get 6,076 kB of archives. After unpacking 16.7 MB will be used.
Do you want to continue? [Y/n/?]



systemctl enable nginx

Synchronizing state for nginx.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d nginx defaults
Executing /usr/sbin/update-rc.d nginx enable

root@LINUXADMIN:~# systemctl start nginx

Setup virtualhost --

rm /etc/nginx/sites-enabled/default

Create /etc/nginx/conf.d/default.conf --

server {
    listen *:8080;
    root /home/docroot;
}

Setup custom log format for nginx as per requirement, tune it as per VM specs --

user www-data;
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    log_format custom "[$time_local] $remote_addr $status $request";
    access_log /var/log/nginx/access.log custom;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Test nginx and start --

nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

systemctl start nginx

Setup fail2ban --

aptitude install fail2ban

The following NEW packages will be installed:
fail2ban file{a} libmagic1{a} libpython-stdlib{a} libpython2.7-minimal{a} libpython2.7-stdlib{a} mime-support{a}
python{a} python-minimal{a} python-pyinotify{a} python2.7{a} python2.7-minimal{a} whois{a}
0 packages upgraded, 13 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,687 kB of archives. After unpacking 20.6 MB will be used.
Do you want to continue? [Y/n/?]

Configure fail2ban to start as unprivileged user --

mkdir /var/fail2ban
useradd -G adm fail2ban
chown fail2ban /var/fail2ban

Group adm is to allow fail2ban to read nginx access logs.

Allow fail2ban user to write logs --

chown fail2ban /var/log/fail2ban.log

Modify fail2ban logrotation config to create new empty log files with the correct permission --

/var/log/fail2ban.log {
    weekly
    rotate 4
    compress

    delaycompress
    missingok
    postrotate
        fail2ban-client flushlogs 1>/dev/null
    endscript

    # If fail2ban runs as non-root it still needs to have write access
    # to logfiles.
    # create 640 fail2ban adm
    create 640 fail2ban adm
}

Create /etc/fail2ban/fail2ban.local to make changes to allow running as the unprivileged user --

[Definition]
socket = /var/fail2ban/fail2ban.sock
pidfile = /var/fail2ban/fail2ban.pid

Make changes to /etc/default/fail2ban --

FAIL2BAN_USER="fail2ban"

Start and enable fail2ban --

systemctl start fail2ban
systemctl enable fail2ban

Synchronizing state for fail2ban.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d fail2ban defaults
Executing /usr/sbin/update-rc.d fail2ban enable

Create actions --

cat /etc/fail2ban/action.d/nginx.local

[Definition]
actionban = echo -n <ip>, >> /var/fail2ban/ban
actionunban = echo -n <ip>, >> /var/fail2ban/unban

As stated before, these actions append to files containing the IPs to be banned/unbanned as CSV values (that's why >> has been used).

Create filters --

cat /etc/fail2ban/filter.d/nginx40{3,4}.local

[Definition]
failregex = ^\[ \+0530\] <HOST> 403 .*$

[Definition]
failregex = ^\[ \+0530\] <HOST> 404 .*$

The anchors (^, $) specify that the whole log line has to be considered.

Create the jail --

cat /etc/fail2ban/jail.local

[nginx_403]
filter = nginx403
logpath = /var/log/nginx/access.log
action = nginx
findtime = 30
maxretry = 5
bantime = 300
usedns = no
enabled = true

[nginx_404]
filter = nginx404
logpath = /var/log/nginx/access.log
action = nginx
findtime = 30
maxretry = 50
bantime = 120
usedns = no
enabled = true

[ssh]
enabled = false

Since the ssh service was not a part of the project, but is enabled in fail2ban by default on Debian, it has been disabled here.

Make fail2ban read the changes and verify status of jails --

fail2ban-client reload
fail2ban-client status

Status
|- Number of jail: 2
`- Jail list: nginx_404, nginx_403

Create an iptables script to read the files /var/fail2ban/ban and /var/fail2ban/unban and add iptables rules --

cat /usr/bin/fail2ban_iptables.sh

#! /bin/bash
PATH="$PATH:/sbin"
if test -e /var/fail2ban/ban
then
    iptables -A INPUT -s `cat /var/fail2ban/ban | sed s/,$//` -j DROP
    rm /var/fail2ban/ban
fi

if test -e /var/fail2ban/unban
then
    iptables -D INPUT -s `cat /var/fail2ban/unban | sed s/,$//` -j DROP
    rm /var/fail2ban/unban
fi

The change to the PATH environment variable is there since cron has a very minimal set of executable search paths.

Fix permissions of the file --

chmod 744 /usr/bin/fail2ban_iptables.sh

Make a cron job to execute the script as root --

root@LINUXADMIN:~# crontab -l | grep -v ^\#
* * * * * /usr/bin/fail2ban_iptables.sh

Testing --

2016-11-17 12:54:57,403 fail2ban.actions[1500]: WARNING [nginx_404] Ban 192.168.3.1

cat /var/fail2ban/ban
192.168.3.1,

After some time (once the cron job runs) --

iptables -L

Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 192.168.3.1 anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

The same client on hitting the server --

wget --timeout 5 http://192.168.3.2/xyzz
--2016-11-17 12:55:04-- http://192.168.3.2/xyzz
Connecting to 192.168.3.2:80... failed: Connection timed out.
Retrying.

--2016-11-17 12:55:10-- (try: 2) http://192.168.3.2/xyzz
Connecting to 192.168.3.2:80... failed: Connection timed out.
Retrying.

--2016-11-17 12:55:15-- (try: 3) http://192.168.3.2/xyzz
Connecting to 192.168.3.2:80... failed: Connection timed out.
Retrying.

--2016-11-17 12:55:22-- (try: 4) http://192.168.3.2/xyzz
Connecting to 192.168.3.2:80... failed: Connection timed out.
Retrying.

After 2 minutes --

2016-11-17 12:56:57,541 fail2ban.actions[1500]: WARNING [nginx_404] Unban 192.168.3.1

cat /var/fail2ban/unban
192.168.3.1,

After some time (once the cron job runs) --

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination