Friday, December 27, 2019

How to set up NextCloud (NC) client on Ubuntu

This howto, inspired by this article, was tested on Ubuntu 16.04 LTS, but should also work on 18.04 LTS and other non-LTS versions.

Installation and startup of NC desktop sync client

sudo add-apt-repository ppa:nextcloud-devs/client
sudo apt update
sudo apt install nextcloud-client

mkdir ~/nextcloud.user@nextcloud.service # sync folder creation
nextcloud # setup authentication and choose created sync folder

Change sync folder icon

It can be useful for the user to see that the NC sync folder is "special". We can do this by setting a custom folder icon via folder properties in Nautilus. E.g. this icon can be used: /usr/share/icons/Humanity/places/48/folder-remote.svg.
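
The same can also be done from the command line via the folder's GIO metadata; a minimal sketch, assuming the sync folder created above (on 16.04 the older gvfs-set-attribute tool provides the equivalent functionality):

# set a custom folder icon (same effect as Nautilus folder properties)
gio set ~/nextcloud.user@nextcloud.service metadata::custom-icon file:///usr/share/icons/Humanity/places/48/folder-remote.svg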

Migrate Ubuntu/Unity/GNOME known folders to NC

mv ~/{Desktop,Documents,Downloads,Music,Pictures,Public,Templates,Videos} ~/nextcloud.user@nextcloud.service
ln -s ~/nextcloud.user@nextcloud.service/{Desktop,Documents,Downloads,Music,Pictures,Public,Templates,Videos} ~

After migration, the symlinks to the known folders in the home directory preserve their icons, but the folders inside the NC sync folder do not, so their icons can also be changed as above, to correspond with their original locations.
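
An alternative to symlinking (not tested here) is to point the XDG known-folder configuration in ~/.config/user-dirs.dirs directly at the new locations; a sketch for two of the folders:

XDG_DESKTOP_DIR="$HOME/nextcloud.user@nextcloud.service/Desktop"
XDG_DOCUMENTS_DIR="$HOME/nextcloud.user@nextcloud.service/Documents"
# ...analogous lines for the remaining known folders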

How to upgrade Google Chrome on Ubuntu

Tested on Ubuntu 16.04 LTS (and should also work on 18.04 LTS and other non-LTS versions):

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
# on 32-bit system remove [arch=amd64] from:
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update
# "apt-get upgrade <package>" is not supported by all apt versions, so:
sudo apt-get install --only-upgrade google-chrome-stable
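
To verify the installed version afterwards:

google-chrome --version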

Inspired by this article.


Wednesday, December 18, 2019

WSL home directory migration to MS OneDrive

motivation

There were problems with using WSL on multiple computers (separate home directories) and accessing Google Drive from them (for details, see older posts in this blog; there are some problems and uncomfortable workarounds when using Google Drive in read-write mode from WSL). Therefore I created a proof of concept of how to have just a single "centralized" home directory on OneDrive, accessed from multiple WSL instances via the C: drive mounted in WSL as /mnt/c.

setup

It is practical (but not mandatory) to have the same path to the OneDrive folder on each Windows computer (unify user and home folder names in Windows, if you feel like it), e.g.:

C:\Users\Richard\OneDrive\
which in WSL means:
/mnt/c/Users/Richard/OneDrive/

It is also practical to have all "known folders" (such as "Desktop", "Documents", "Pictures", etc.) migrated to OneDrive as well, to be more certain that all your files are safely backed up by the cloud (another story).

Let's assume user and home folder names "Richard" in Windows and "richard" in WSL.

I have not tested the following commands exactly as written; I just summarized what I did, and it may not be complete or precise enough, because there was a lot of tuning, so be careful and think before doing anything.

From WSL, on all computers determined to have /home/richard/ centralized via OneDrive, do this:

sudo ln -s /mnt/c/Users/Richard/OneDrive/ /home/richard2 # temporary "second home" symlink to OneDrive
sudo chown richard:richard /home/richard2
sudo mv /home/richard /home/richard_backup # keep the old home directory as a backup
sudo mv /home/richard2 /home/richard # the symlink becomes the new home directory

Now you can start a new WSL session and see whether your (or richard's) home directory is already placed in OneDrive. From now on, you have your home folder accessible from any WSL where you have your OneDrive and this "symlinking mount" in place.

(Note: it is also possible to change the user name in Windows, and there is more than one way to do it.)

migration

Migration of files and folders from your WSL (or other Linux/Unix) home directory can be very specific and differs case by case. Simplistic example:

# copy everything including dotfiles; -a preserves directories and permissions,
# and the trailing "/." avoids the ".*" glob matching "." and ".."
cp -a /home/richard_backup/. /home/richard/

Maybe you will want to migrate only a subset of all files and folders, and maybe you will want to do more sorting of what to place into which OneDrive subfolder, because in this step you are integrating (merging) your WSL home directory with your OneDrive, and you want order, not chaos, in your files afterwards. Also be careful about the risk of unwanted file replacements; resolve collisions before it's too late.
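
A more controllable alternative could be rsync, which can skip files that already exist on the OneDrive side; a sketch (review the dry run output first):

rsync -a -v --ignore-existing --dry-run /home/richard_backup/ /home/richard/ # dry run, only shows what would be copied
rsync -a --ignore-existing /home/richard_backup/ /home/richard/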

file permissions

One caveat is that file permissions are not set correctly in WSL, and this mask hack in ~/.profile can be useful:

if [[ "$(umask)" = "0000" ]]; then
  umask 0022 # or umask 0027 or umask 0077 for enhanced confidentiality, further reading
fi

but it was not enough in this case, and files were switched from originally non-executable to executable, without knowing exactly why. A consequence of this executability was also missing colors in the terminal; note that ~/.profile does not need to be executable for Bash to source it (counter-intuitive, but safer).

The mask applies to future permission changes, but past permission changes can be fixed e.g. this way:

# all permissions removal from all unauthorized:
chmod -R o-rwx /home/richard/

# (potentially dangerous, depending on the specific contents of OneDrive)
sudo find /home/richard/ -type f -exec chmod a-x {} +

# fix executability selectively:
chmod 750 /home/richard/Workspace/*/.git/hooks/{pre-commit,post-commit}
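
Another option for the permissions problem (not used here; an assumption based on newer WSL versions with DrvFs metadata support) is to set default permission masks for drvfs mounts in /etc/wsl.conf:

[automount]
options = "metadata,umask=022"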

In order to avoid re-running the chmod fixes above every time when switching to a WSL instance with a potentially different UID, it is also practical to use the same UID (according to /etc/passwd) in every WSL instance.
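
A sketch of unifying the UID; the target value 1000 is just an example, and no processes should be running as the user while changing it:

id -u richard # show the current UID
sudo usermod -u 1000 richard # also re-owns files under /home/richard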

perl

When there is a ~/.cpan directory in the migrated folder, you may decide not to transfer it and to do this instead:

perl -MCPAN -e shell # opens the CPAN shell; the following commands run inside it
install Bundle::CPAN
reload index
reload cpan
exit

other hacks


Thursday, December 5, 2019

Hooking git to update blob ID ($Id$) on every commit automatically

Git does not update the $Id$ placeholder in working copies automatically out of the box. One way to achieve that is described in this blog post. Let's assume that your git-versioned project folder is your current working directory.

Set .gitattributes

First, ensure that files which need to have $Id$ populated with their current blob ID have the ident attribute set in the .gitattributes file, e.g.:

*       ident whitespace export-subst
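
For example, a hypothetical narrower pattern enabling expansion only for shell scripts would be:

*.sh    ident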

More information about .gitattributes.

Create post-commit hook

Create .git/hooks/post-commit file with this content:

#!/bin/bash
# temporarily modify all files (append a newline), then check them out again;
# the checkout re-applies the ident filter, so $Id$ gets updated with the blob
# ID created by the commit
echo | tee --append * > /dev/null
git checkout *

Make this file executable, e.g.:

chmod 755 .git/hooks/post-commit

After this, each $Id$ occurrence should be replaced with $Id: <blob_ID> $ in each file whose name matched a pattern with ident in .gitattributes.

Maybe a post-commit file already exists when you apply this, and is already serving some purpose. In such a case, you'll need to somehow integrate the script above into the existing post-commit script.

Checking all $Id$ occurrences in versioned files

grep '\$Id' *

More information about the git post-commit hook. This "hack" was inspired by this post.

Thursday, November 21, 2019

Improving visibility of multiple hard-links of the same file

The command below color-differentiates files (in the current working directory) having 2 to 9 hard links (in the whole filesystem) and prefixes all files (in the current working directory) with a command for finding all directory entries for the particular inode.

Tested on WSL / Ubuntu 18.04:

# one-off run:
ls -dali --time-style +"%Y-%m-%d %H:%M" * .* | sed 's/^/sudo find \/ 2> \/dev\/null -inum /g' | grep -e "[-xtTsS] [234567890] " -e ""

# the same as an alias for the current shell session:
alias lnshow='ls -dali --time-style +"%Y-%m-%d %H:%M" * .* | sed "s/^/sudo find \/ 2> \/dev\/null -inum /g" | grep -e "[-xtTsS] [234567890] " -e ""'

# persist the alias and use it:
echo "alias lnshow='ls -dali --time-style +\"%Y-%m-%d %H:%M\" * .* | sed \"s/^/sudo find \/ 2> \/dev\/null -inum /g\" | grep -e \"[-xtTsS] [234567890] \" -e \"\"'" >> ~/.profile

lnshow

Wednesday, November 6, 2019

Speeding up filesystem search indexing in WSL by keeping unnecessary drives unmounted

The Problem

Search indexing for locate was very slow, sometimes seemingly endless. Indexing was started by sudo updatedb, but was not finishing.

The Hypothesis

As seen via the mount command in WSL, there were several mounted drives slowing down the search indexing for locate:
  • C: (default system drive of the Windows host) - needed sometimes
  • G: (drive from Google Drive File Stream) - however, not working correctly
  • multiple (?) N: drives (resulting from last month's experiments) - not needed anymore
I wanted to unmount them and keep them unmounted.

The Solution

The /etc/wsl.conf did not exist initially in WSL (ls -al /etc/wsl.conf), so I created it this way:

sudo bash -c "echo [automount] >> /etc/wsl.conf"
sudo bash -c "echo enabled=false >> /etc/wsl.conf"

Restart the WSL instance via the Windows command prompt (cmd.exe); the last command launches the terminated instance again:

wsl --list --running
wsl --terminate "Ubuntu-18.04"
ubuntu1804

Now we can see in WSL via mount that those Windows drives are not mounted anymore, and sudo updatedb is remarkably faster.

Drives can be mounted and unmounted on demand - example:

sudo mount -t drvfs C: /mnt/c
sudo umount /mnt/c

More information about wsl.conf: https://devblogs.microsoft.com/commandline/automatically-configuring-wsl/.

Tuesday, October 8, 2019

Windows Subsystem for Linux (WSL) + Google Drive mount

Motivation

There was a need to access files mounted to a Windows 10 machine via Google Drive File Stream (GDFS, as the G: drive) from WSL, to be able to work with them on the same machine with Linux tools like vim, bash, etc., not to be dependent on a separate Linux machine, nor to inefficiently install those tools in Windows (some could work, some not without problems or at all), nor to copy them manually from local to GDFS locations after modifications.

Components

Solution consists of:
  • Windows 10 OS
  • drive G: mounted by Google Drive File Stream
  • OpenSSH server for Windows (running as Windows 10 service)
  • SFTP Net Drive
  • Windows Subsystem for Linux (WSL)
Tools used during implementation:
  • Windows PowerShell
  • WSL Bash

Hacks worth mentioning

Steps

  1. install Ubuntu 18.04 LTS based WSL in Windows
  2. install Google Drive File Stream and connect to the service (mount as e.g. G:)
  3. install OpenSSH Server for Windows - optional feature (Settings > Apps > Manage optional features > Add a feature > OpenSSH Server > Install)
  4. via Services management enable, set to start automatically & start the OpenSSH Authentication Agent service, and then via administrative PowerShell:
    • Start-Service ssh-agent
    • Start-Service sshd
    • Install-Module -Force OpenSSHUtils
  5. comment out AuthorizedKeysFile in C:\ProgramData\ssh\sshd_config
  6. non-administrative PowerShell:
  7. (troubleshoot if needed) and restart OpenSSH Server
  8. download SFTP Net Drive and install it (+ register it to start on OS startup), connect with an authorized username to localhost (mount as e.g. N:); there are these alternatives:

  9. set installed program to start automatically after log-on (click on program in start menu, open file location, Windows+R: shell:startup, create shortcut of the program there)
  10. make a directory symlink (if using SFTP Net Drive Free) or a directory junction (if using SFTP Net Drive V2 or Full) on Windows from the mounted user profile to G: - now G: can also be accessed from WSL as /mnt/n, but only after mounting in WSL (see the sketch after this list):
  11. from cmd.exe: ubuntu
  12. mount N: from WSL: sudo mkdir /mnt/n; sudo mount -t drvfs N: /mnt/n
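
A sketch for step 10, run from cmd.exe as Administrator; the GDrive link name and the profile path are hypothetical examples (via SFTP the link then appears as N:\GDrive, i.e. /mnt/n/GDrive in WSL):

mklink /D "C:\Users\Richard\GDrive" "G:\My Drive"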

Further hacks

There were problems with perpetually changing inodes of files when vim tried to write modifications, therefore this workaround is currently in place:

cat ~/.vimrc

" write files in place, without backup/swap/undo files, to keep inodes stable
set nobackup
set backupcopy=yes
set noswapfile
set noundofile
set nowritebackup

Automatic mounting of N: in WSL with the owner's privileges:

tail -n 10 ~/.profile

if [ ! -d "/mnt/n/" ]; then
        sudo mkdir /mnt/n
fi

sudo mount -t drvfs -o uid=1001,gid=1001 N: /mnt/n

# symlink some folder from the mounted drive into the home directory
if [ ! -d "SYMLINKED_FOLDER_NAME" ]; then
        rm -f SYMLINKED_FOLDER_NAME
        ln -s /mnt/n/GDrive/... SYMLINKED_FOLDER_NAME
fi

This requires the user to be in the admin or sudo group in /etc/group and these settings in /etc/sudoers (via visudo in WSL) - not a very secure setup, but WSL is not a "mission critical server" :) :

sudo cat /etc/sudoers

# Members of the admin group may gain root privileges
%admin ALL=(ALL) NOPASSWD: ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL

Conclusion

The purpose of this article is for me to be able to re-run these steps on other computers or user profiles; it can be continually improved in the future, to be more exact.



Wednesday, August 28, 2019

Inspirations for open-source business

There are various ways to produce open source software and make a profit at the same time. Here are some inspirations:

Distributed/parallelized TSDB on PostgreSQL/TimescaleDB

There is one-week-old news from Timescale about their addition bringing a rapid increase in possible throughput and data volume when using their TimescaleDB. Click here.

Timescale's article also illuminates how it takes the so-called CAP theorem into account.

Thursday, August 22, 2019

How to enable and use firewall management on CentOS/RHEL/Fedora/Docker (firewall-cmd)

Firewalld is a daemon for the management of IP filtering (firewalling) rules, based on iptables.

How to install and enable it:

sudo yum install firewalld
sudo systemctl enable firewalld
sudo systemctl start firewalld

This tool can be used on a Linux system for simple opening or protecting of TCP/UDP ports, for more detailed/sophisticated whitelisting or blacklisting via rich rules (based not only on TCP/UDP ports, but also on source or destination IP address, etc.), and for various other IP filtering configurations.
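
A few illustrative commands (the port number and the source address are made-up examples):

sudo firewall-cmd --permanent --add-port=8080/tcp # open TCP port 8080
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="22" protocol="tcp" accept' # rich rule: allow SSH only from one address
sudo firewall-cmd --reload # apply the permanent configuration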

More detailed information can be found e.g. in this article from Mark Cunningham.

Wednesday, August 21, 2019

ZABBIX migration from version 2.4.8 to 4.2-PostgreSQL/TimescaleDB


Preconditions

  • old ZABBIX server 2.4(.8) + PostgreSQL DB (alternatively MySQL/MariaDB)
  • RHEL/CentOS 7 VM(s) for new ZABBIX server+DB+frontend
  • review TimescaleDB licensing

Installations on RHEL/CentOS 7


  • PostgreSQL/TimescaleDB 1.4
  • DB creation (just for the new ZBX installation and its verification; it won't be used for the migration):
    • sudo -u postgres psql
    • CREATE ROLE zabbix_server_db_login LOGIN PASSWORD '***********';
    • CREATE DATABASE zabbix_server_db OWNER zabbix_server_db_login;
    • \q
  • PHP 7.3
  • ZABBIX server+frontend 4.2 - leave the server shut down before next steps

Download & unpack the ZABBIX 2.4.8 original DB schema to the new DB server - deployment preparation


wget https://sourceforge.net/projects/zabbix/files/ZABBIX%20Latest%20Stable/2.4.8/zabbix-2.4.8.tar.gz

tar xvzf zabbix-2.4.8.tar.gz

cd zabbix-2.4.8/database/postgresql/

Export ZABBIX 2.4.8 instance DB data (without schema)


pg_dump -h old-ZBX-address -d old-ZBX-DB-name -U old-ZBX-DB-read-permitted-user-name \
  --no-owner --no-privileges --data-only \
  --exclude-schema repeatedly-custom-not-migrating-schemas \
  -T repeatedly-custom-not-migrating-tables \
  -T acknowledges -T alerts -T auditlog -T events -T service_alarms \
  -T 'history*' -T 'trends*' \
  > zabbix_server_db_DML.sql

In case of migration from a non-PostgreSQL DB (e.g. MySQL/MariaDB), a PostgreSQL-compatible dump needs to be created in this step (e.g. via mysqldump -c -e -t --compatible=postgresql --no-create-info --skip-quote-names --skip-add-locks zabbix > zabbix.dmp) - not tested when writing this article.

Deploy ZABBIX 2.4.8 instance DB data (with original schema) to new PostgreSQL server


Re-create DB for import:

sudo -u postgres psql -c "DROP DATABASE zabbix_server_db_imported;" -c "CREATE DATABASE zabbix_server_db_imported OWNER zabbix_server_db_login;"

Import original schema and instance data:

psql -d zabbix_server_db_imported -U zabbix_server_db_login -f schema.sql -f zabbix_server_db_DML.sql

Check & disable hosts


psql -d zabbix_server_db_imported -U zabbix_server_db_login

zabbix_server_db_imported=> SELECT status, COUNT(status) FROM hosts GROUP BY status ORDER BY status;



status | count
-------|----------------------------
0      | count of enabled ZBX hosts
1      | count of disabled ZBX hosts
3      | count of all ZBX templates
5      | count of all ZBX proxies

The UPDATE in the following step should report the same row count as displayed for status=0:

zabbix_server_db_imported=> UPDATE hosts SET status=1 WHERE status=0;

zabbix_server_db_imported=> \q

Start ZBX server (upgrade) & check log


tail -f /var/log/zabbix/zabbix_server.log

sudo service zabbix-server start


Check the log for possible errors, warnings and exceptions, and fix what is needed. After a while, the old ZBX DB should be upgraded according to the new ZBX DB schema.

Install TimescaleDB into new ZBX DB


sudo -u postgres psql zabbix_server_db_imported

zabbix_server_db_imported=# CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

zabbix_server_db_imported=# \q


zcat /usr/share/doc/zabbix-server-pgsql*/timescaledb.sql.gz | psql zabbix_server_db_imported -U zabbix_server_db_login


For more details about TimescaleDB ZABBIX-supported deployment, read this.

Wednesday, July 24, 2019

How to share mRemoteNG connections on more Windows computers via Google Drive

The situation

Using mRemoteNG on several Windows computers and not wanting to make the same connection configurations on each of them.

The setup

  • Windows 10 (Home, Pro, ...)
  • Google Drive (GDrive) synchronized via Drive File Stream

The procedure

  1. Move your %APPDATA%\mRemoteNG somewhere on your GDrive.
  2. Open command-line terminal (cmd.exe) as Administrator.
  3. Make symbolic directory link from original location of moved folder, e.g.:
C:\WINDOWS\system32>mklink /D "%APPDATA%\mRemoteNG" "G:\My Drive\***\***\AppData\Roaming\mRemoteNG"
symbolic link created for C:\Users\******\AppData\Roaming\mRemoteNG <<===>> G:\My Drive\***\***\AppData\Roaming\mRemoteNG

(A directory junction won't work instead of a symbolic directory link, because junctions work only within the scope of local NTFS drives.)

Repeat those steps on each Windows computer. Maybe (depending on your specific situation) you will need to merge the mRemoteNG connection configuration files.

How to find the current external IP address from the command line

curl ipecho.net/plain
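
Alternative services (availability may vary over time):

curl ifconfig.me
curl icanhazip.com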

Monday, July 15, 2019

ZABBIX version 4.2 user & admin training

These days I am in the process of preparing a 2-day ZABBIX training with this partitioning:
  • day 1, part 1: ZABBIX v4.2 basic concepts & web frontend usage
  • day 1, part 2: ZABBIX v4.2 basic administration
  • day 2, part 1: ZABBIX v4.2 intermediate administration
  • day 2, part 2: ZABBIX v4.2 advanced administration
For slides for the training click here.

Sunday, July 7, 2019

Blogger/Blogspot vs. Google Docs/Slides

I was trying to use this service for publishing technical articles, but there were various formatting problems (e.g. mismatched header sizes). Therefore I started to author content primarily via Google Docs (or Slides, Sheets, etc.) as publicly shared online documents, and link them from this service.

The complete collection of existing and future articles can be found here.

Friday, July 5, 2019

ZABBIX + TimescaleDB (PostgreSQL 11) installation from Docker images

A new article about installing and running ZABBIX using Docker containers has been published here.

If you don't want to read through the whole genesis, the final YAML file for docker-compose can be downloaded from here.