Unix

Long time no write. I’ve started multiple posts in the past 2 years but never had time to finish them, as they were quite long. Finally, here is a quick fix/post for a problem I couldn’t find a solution to anywhere out there, so it might be helpful.

I had to configure AIDE on an old RHEL 6 (x64) server that was kind of messed up, and right after starting to unlink the previously prelinked libraries we encountered an error.

 /usr/sbin/prelink -ua
/usr/sbin/prelink: /usr/lib64/samba/libserver-role-samba4.so: Could not find one of the dependencies
/usr/sbin/prelink: /usr/pgsql-9.1/lib/libpq.so.5.4 is not present in any config file directories, nor was specified on command line 

After a quick investigation we realized that a library libserver-role-samba4.so depends on was missing.

 ldd /usr/lib64/samba/libserver-role-samba4.so
        linux-vdso.so.1 =>  (0x00007ffc3caf2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fb21fd20000)
        libsamba-debug-samba4.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007fb21f98b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fb22014c000)

Since the file was contained in the RPM it wasn’t actually missing; I realized it was just not in the right place. I had to create a symlink in /usr/lib64/ pointing to the file.

 ln -s  /usr/lib64/samba/libsamba-debug-samba4.so /usr/lib64/

Tried running prelink -ua again, and bam, another error.

/usr/sbin/prelink -ua
/usr/sbin/prelink: /usr/lib64/samba/libinterfaces-samba4.so: Could not find one of the dependencies

Again, the same issue:

 ldd /usr/lib64/samba/libinterfaces-samba4.so
        linux-vdso.so.1 =>  (0x00007ffdcffb3000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f779f998000)
        libreplace-samba4.so => not found
        libtalloc.so.2 => /usr/lib64/libtalloc.so.2 (0x00007f779f78a000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f779f3f6000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f779fdc5000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f779f1ee000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f779efb6000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f779edb2000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007f779ebaf000)

And a quick fix:

ln -s /usr/lib64/samba/libreplace-samba4.so /usr/lib64/

All good:

 ldd /usr/lib64/samba/libinterfaces-samba4.so
        linux-vdso.so.1 =>  (0x00007fff99f21000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f70d37e5000)
        libreplace-samba4.so => /usr/lib64/libreplace-samba4.so (0x00007f70d35e3000)
        libtalloc.so.2 => /usr/lib64/libtalloc.so.2 (0x00007f70d33d5000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f70d3041000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f70d3c12000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f70d2e39000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f70d2c01000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f70d29fd000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007f70d27fa000)
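Since the same find-and-symlink dance had to be repeated per library, the "not found" entries can be pulled out of ldd’s output mechanically. A minimal sketch – missing_deps is a made-up helper name, and the canned input below just replays one of the ldd lines shown above:

```shell
# Hypothetical helper: print every dependency that ldd reports as "not found".
missing_deps() {
    awk '/=> not found/ { print $1 }'
}

# Real usage would be something like:
#   ldd /usr/lib64/samba/*.so | missing_deps | sort -u
# Demonstration with a canned ldd line:
printf '%s\n' \
    '        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f779f998000)' \
    '        libreplace-samba4.so => not found' | missing_deps
# prints: libreplace-samba4.so
```

Each printed name can then be located (e.g. with find under /usr/lib64/samba/) and symlinked by hand as shown above.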

Now, back to the second part of the original error.

....
/usr/sbin/prelink: /usr/pgsql-9.1/lib/libpq.so.5.4 is not present in any config file directories, nor was specified on command line 
...

ldd did not show any issues with this library, so the solution had to be something else. It turns out you have to add the additional library paths to /etc/prelink.conf so prelink can properly unlink them.

echo "-l /usr/pgsql-9.1/lib/" >> /etc/prelink.conf

After all the prelinking issues were resolved I was happy and finally ready to run /usr/sbin/aide --init, but it wasn’t long before I encountered another issue.

 /usr/sbin/aide --init
/usr/sbin/prelink: /usr/lib64/libqmf2.so.1.0.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libsigar.so: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libqpidmessaging.so.3.2.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libunistring.so.0.1.2: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libqpidclient.so.7.0.0: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libltdl.so.7.2.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /lib64/libcap-ng.so.0.0.0: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process

Even though we ran prelink with -ua (undo all), apparently not all libraries got unlinked. The next fix was odd but pretty easy: just specify the libraries you need unlinked manually.

 /usr/sbin/prelink -ua /usr/lib64/libunistring.so.0.1.2  /usr/lib64/libqpidclient.so.7.0.0 /usr/lib64/libltdl.so.7.2.1 /lib64/libcap-ng.so.0.0.0 /usr/lib64/libqmf2.so.1.0.1 /usr/lib64/libsigar.so /usr/lib64/libqpidmessaging.so.3.2.1
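With half a dozen libraries, typing out the list by hand gets tedious. The failing paths can be pulled out of prelink’s own messages – collect_failed is a made-up helper name, and the error format assumed is exactly the one shown above:

```shell
# Each "dependencies has changed" line has the library path as the second
# ": "-separated field; print just that field.
collect_failed() {
    awk -F': ' '/dependencies has changed/ { print $2 }'
}

# Real usage would be something like:
#   /usr/sbin/prelink -ua 2>&1 | collect_failed > /tmp/failed.txt
#   xargs /usr/sbin/prelink -ua < /tmp/failed.txt
# Demonstration with one canned error line:
echo "/usr/sbin/prelink: /usr/lib64/libsigar.so: at least one of file's dependencies has changed since prelinking" | collect_failed
# prints: /usr/lib64/libsigar.so
```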

Finally, AIDE was able to create the database successfully.

 /usr/sbin/aide --init

AIDE, version 0.14

### AIDE database at /var/lib/aide/aide.db.new.gz initialized.

A few days ago we installed Piwigo, an open-source web-based photo gallery application, and I can safely say a cool one.

But one might wonder why a company, let alone one in the IT industry, needs photo gallery management software. The answer is simple: there are a lot of photos from all the New Year parties and team buildings we need to manage 🙂

The first issue I encountered was that Piwigo does not have built-in LDAP authentication, which is usually one of the basic requirements in a corporate environment. A quick search revealed the “Ldap login” extension, which unfortunately didn’t work at all.

Apache authentication came to mind, and after a quick check it turned out that Piwigo has support for Apache (HTTP) authenticated users. You just need to enable it in the include/config_default.inc.php file under the Piwigo root directory: find the apache_authentication line and set it to true, like this: $conf['apache_authentication'] = true;

Now we need to set up HTTP authentication in Apache. Easy enough: just create a .htaccess file in the root directory of Piwigo with the following:

# Distinguished name of Bind user and password
AuthLDAPBindDN "CN=Your_CN,OU=Your_OU,DC=example,DC=com"
AuthLDAPBindPassword "secure_p@ssw0rd"

# LDAP URL and path to search for user
# To add multiple LDAP servers for redundancy, just separate them with a space
AuthLDAPURL "ldap://dc1.example.com dc2.example.com/OU=Your_OU,DC=example,DC=com?sAMAccountName?sub?(objectClass=*)"

# Specify authentication type and auth provider
AuthType Basic
AuthName "Arbitrary instruction text"
AuthBasicProvider ldap

# Allow any valid user 
require valid-user

Or allow a specific user…

require ldap-user "user.name"

… or even a group.

require ldap-group "CN=Your_CN,OU=Your OU,DC=example,DC=com"

On Ubuntu 14.04 with LAMP packages installed I just needed to activate one additional Apache module, authnz_ldap. You can do that with one command, a2enmod authnz_ldap, and don’t forget to restart Apache afterwards.

After the first login, the user will appear in the Piwigo administration panel, where you can set their permission level.

Cheers!

I had previous experience with awesome ZFS at my current company where I implemented two backup storage servers for our corporate services.

For home use I wanted to create a NAS with several network shares and with some data redundancy. I chose FreeNAS: it had everything I needed already implemented, with a nice and sleek web UI. Or at least that’s what I thought.

I wanted to create a RAID-Z pool using some percentage of my three hard drives for the somewhat important data, and use the rest as a separate non-redundant (or even striped) zpool – like one would do on Windows Server with dynamic disks. But FreeNAS does not support creating zpools from disks/partitions of different sizes through its web interface; it always takes the whole drive. This is not that surprising – the official ZFS documentation states that “The recommended mode of operation is to use an entire disk”. Maybe it’s not recommended, but that doesn’t mean it won’t work, and dare I say – smoothly.

So, we’ll have to get our hands dirty and do it manually.

First, we want to list all the hard drives on the system, which we can do with a quick ls on the /dev/ directory.

ls /dev/ada?

You could check if the drive is already partitioned with (where X is the number of the drive you want to check out):

gpart list adaX

The output will be something like “gpart: No such geom: adaX” if the drive is new, meaning there is no partition table on the drive. In case the drive is already partitioned, you will most likely want to delete its partitioning info with:

gpart destroy adaX

You can add -F to force most of these commands.

Now, we want to create a new partition table for the drive with:

gpart create -s gpt adaX

Then, we want to add partitions with desired size and file system:

gpart add -s 500g -t freebsd-zfs adaX

Where “-s 500g” sets the size to 500 gigabytes, and “-t freebsd-zfs” sets the ZFS partition type.

This will generate the first partition, usually named adaXp1. In my scenario, I added another partition with the remaining size of the drive. Repeat this for all the drives you want in your zpool.
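The per-drive gpart steps can be collected into a loop. This sketch is a dry run – it only prints the commands so they can be reviewed first; remove the echo to apply them. The drive names and the 500g split are the ones used in this post:

```shell
# Dry run: print the partitioning commands for each drive in the pool.
for drive in ada1 ada2 ada3; do
    echo gpart create -s gpt "$drive"               # new GPT partition table
    echo gpart add -s 500g -t freebsd-zfs "$drive"  # first partition, 500 GB
    echo gpart add -t freebsd-zfs "$drive"          # second partition, rest of drive
done
```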

Now, it’s time to create the zpools, a pretty straightforward procedure.

zpool create poolname raidz ada1p1 ada2p1 ada3p1

zpool create secondpoolname ada1p2 ada2p2 ada3p2

zpool will try to mount each pool under /mnt/poolname, which will fail if a directory with the pool name doesn’t exist. Fine with me, because I want to keep using FreeNAS through its WebUI so I don’t have to meddle with manual CIFS/NFS configuration.

When you go to the FreeNAS volume manager in the WebUI, the zpools you just created will not be listed. No worries though. The easiest way to get them mounted and imported in the web interface is to go back to the CLI and export the pools with:

zpool export poolname

And then use the Auto Import Volume feature in the Storage tab in FreeNAS.

Extra – in case you need to do some maintenance on your ZFS pools, do it only through the CLI, as the web interface might give unpredictable or undesired results.

To replace a failed hard drive, recreate the partitioning scheme on the new drive and do:

zpool replace poolname adaXpX

This will resilver (rebuild) the ZFS pool.

Note that this is not a recommended way to utilize ZFS, and it may require additional manual steps for creating and mounting the swap partition(s) that FreeNAS usually handles. ZFS can be very memory intensive depending on pool size and configuration, so take caution and use this configuration at your own risk.

One of our lab networks has access to the internet only through a SOCKS proxy provided by our contractor. That works fine in most cases, but not for OpenSUSE’s package manager (zypper), since it has practically no support for SOCKS proxies.

One easy and fast workaround is to set up a local HTTP proxy server that redirects all traffic to a specified parent SOCKS proxy. From what I’ve read, Squid doesn’t support a SOCKS parent, and honestly I didn’t want to go with it as it seemed like overkill.

The simple solution was Polipo: a small, fast and easy-to-set-up proxy server that supports a SOCKS parent proxy. An RPM package was already available in SUSE’s repository, so I downloaded it on another machine, SCPed it to the OpenSUSE box, set a few things and voila.

For the quickest and simplest setup I added these three parameters to the /etc/polipo/config file.

daemonise = true
socksParentProxy = "proxy.hostname.or.ip:proxyport"
socksProxyType = socks5

Run polipo. Optionally, you can add Polipo to cron so it starts with the system.
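To actually point zypper at Polipo, you can export the standard proxy environment variables. A sketch, assuming Polipo is listening on its default port 8123 (check proxyPort in /etc/polipo/config if you changed it):

```shell
# Point HTTP(S) traffic at the local Polipo instance; 8123 is Polipo's
# default listening port (an assumption - adjust if proxyPort was changed).
PROXY_URL="http://127.0.0.1:8123"
export http_proxy="$PROXY_URL" https_proxy="$PROXY_URL"
echo "proxy set to $http_proxy"
# zypper refresh   # uncomment on the OpenSUSE box itself
```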

 

Secure Shell, or SSH, is a highly versatile application-layer network protocol used for secure communication between networked hosts (in a server/client model). It was designed as a replacement for telnet, using public-key cryptography to provide data confidentiality on unsecured networks, i.e. the Internet.
SSH is most popular on Unix-like systems and is used for remote administration, tunneling, TCP and X11 forwarding, and even file transfer (SFTP and SCP). This post will focus on SSH on Windows, as I mostly work with it, and on one of its most interesting features for me – SSH tunneling / TCP forwarding.

 

Needed software

The most popular flavor on POSIX systems is OpenSSH, which includes ssh (the client), sshd (the SSH server daemon), scp, sftp and others.
On Windows: you can actually go with the same OpenSSH package under Cygwin (a Unix-like environment for Microsoft Windows).
There are of course some Windows-native servers and clients, notably:
KpyM Telnet/SSH Server, freeSSHd, the unbeatable PuTTY and its many forks, my favourite being KiTTY.
DD-WRT and OpenWrt feature the Dropbear SSH server and client for its light use of resources.

 

Local port forwarding

Local port forwarding enables you to tunnel TCP traffic from your machine to the SSH server, or to a remote network the SSH server has access to.
The SSH client on your local machine listens on a specified port and forwards all TCP traffic to the specified destination address and port.

For example: VNC Viewer (traffic destined to localhost on port 5900) -> SSH client listening on port 5900 and forwarding the traffic to the specified IP and port on the server side of the tunnel -> server -> other hosts the server has access to (optional).

 
Note that the local port is an arbitrary port number, as long as you can specify it in the software you wish to tunnel.
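The VNC chain above boils down to a single client-side command. A dry-run sketch – vnc-host.internal and ssh-server.example.com are placeholder names, and the echo only prints the invocation, so drop it to actually connect:

```shell
LOCAL_PORT=5900                      # port the SSH client listens on locally
TARGET="vnc-host.internal:5900"      # destination reachable from the SSH server
JUMP="user@ssh-server.example.com"   # the SSH server itself
echo ssh -L "${LOCAL_PORT}:${TARGET}" "$JUMP"
# prints: ssh -L 5900:vnc-host.internal:5900 user@ssh-server.example.com
```

Then point VNC Viewer at localhost:5900.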
 

If you ever had a machine with two LAN cards that needs failover, with, for example, each LAN card connected to its own router with an internet connection, then this article is for you.

While working at one company I had a request that two Cisco routers each be connected to one LAN card on the same machine, while on the other side they connect to a mobile operator using IPsec over a GRE tunnel. I set up the Cisco routers and configured the parameters for IPsec and GRE, but the problem started when I wanted to access the machine from both sides. If you configure the gateway the normal way, you get only one router as the default gateway and all traffic from the machine goes through it. But in this case you need the traffic that comes in from router1 to go back out through router1, and from router2 through router2. This is done using policy routing. The following commands configure the routing tables to route traffic to the corresponding gateway:

ip rule add from 192.168.0.10 table uplink1
ip route add default via 192.168.0.1 dev eth0 table uplink1

ip rule add from 192.168.0.20 table uplink2
ip route add default via 192.168.0.2 dev eth1 table uplink2

ip route add default scope global nexthop via 192.168.0.2 dev eth1 weight 1 nexthop via 192.168.0.1 dev eth0 weight 1

The first line defines a policy that all traffic coming from IP 192.168.0.10 (eth0) will use routing table uplink1, and the second line adds default gateway 192.168.0.1 (router1) to table uplink1 via eth0. The same commands are used for eth1 with the corresponding IPs. The last line is important because we still don’t have a default gateway in the main routing table. Using nexthop we can add several gateways and give them weights if we want to prioritize one, or, as in this case, give them the same weight to use them equally. You can put these commands into /etc/rc.local if you want them executed on every startup.
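For /etc/rc.local, the same commands can be kept as a small script. This sketch is a dry run – RUN=echo makes it print each command instead of executing it; set RUN= (or delete that line) to apply for real. The IPs, interfaces and table names are the ones used in this post:

```shell
RUN=echo   # dry run: print the commands; set RUN= to actually execute them
$RUN ip rule add from 192.168.0.10 table uplink1
$RUN ip route add default via 192.168.0.1 dev eth0 table uplink1
$RUN ip rule add from 192.168.0.20 table uplink2
$RUN ip route add default via 192.168.0.2 dev eth1 table uplink2
$RUN ip route add default scope global \
    nexthop via 192.168.0.2 dev eth1 weight 1 \
    nexthop via 192.168.0.1 dev eth0 weight 1
```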

In the end, we had also forgotten to edit /etc/iproute2/rt_tables and define the tables. It should look something like this:

#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
32767 uplink1
32766 uplink2
#1 inr.ruhep
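Adding the table entries can be made idempotent, so the snippet is safe to rerun (from rc.local, for instance). The sketch below works on a temp copy at /tmp/rt_tables.demo so it can be tried harmlessly; point RT_TABLES at /etc/iproute2/rt_tables for real use:

```shell
RT_TABLES=/tmp/rt_tables.demo                   # use /etc/iproute2/rt_tables for real
printf '255 local\n254 main\n' > "$RT_TABLES"   # stand-in for the existing file

# Append each table entry only if that exact line is not present yet.
for entry in "32767 uplink1" "32766 uplink2"; do
    grep -qx "$entry" "$RT_TABLES" || echo "$entry" >> "$RT_TABLES"
done
cat "$RT_TABLES"
```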

You can use commands like ip rule show, ip route show table uplink1, ip route and route to debug.