I encountered a problem where certain messages being sent to our Graylog instance had fields larger than the Elasticsearch/Lucene limit of 32 KB, and thus failed to be indexed because of that one field. I kind of wished that Graylog had a more intelligent way to handle these… There are a bunch of people who run into problems like this – just search for any part of this error and you’ll see many complaints.

{"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"Field_name\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[...]...', original message: bytes can be at most 32766 in length; got 32773","caused_by":{"type":"max_bytes_length_exceeded_exception","reason":"max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 32773"}}

After a while of searching for a solution, there was none – no ready-to-use one, at least. Graylog support suggested splitting the field in several forum posts, but without specific instructions. So I spent some time and figured out one way to do it using the substring function (described here).

The rule I’ve created will first check whether the field you want to deal with even exists – no need to process the rule and waste CPU cycles if the field is absent. The rule will then generate additional fields for any data beyond 32 KB, naming them “_continued_X”, up to three new fields of 32 KB each – four fields in total for a maximum of 128 KB. Fields smaller than 32 KB will also be processed, but their name and content will effectively stay the same.

Before you begin, make sure the field you are trying to split is properly parsed by Graylog (i.e. you have appropriately configured inputs and/or extractors). Then create a new pipeline or add to an existing one.

Create a new pipeline rule based on the following code, then link the rule to the appropriate streams and stages.

rule "Split_a_field_larger_than_32kb"
when
has_field("your_field_name")
then
let any_var_name = to_string($message.your_field_name);
remove_field("your_field_name");
set_field("your_field_name", substring(any_var_name, 0, 32766));
set_field("your_field_name_continued_1", substring(any_var_name, 32766, 65532));
set_field("your_field_name_continued_2", substring(any_var_name, 65532, 98298));
set_field("your_field_name_continued_3", substring(any_var_name, 98298, 131064));
end

Obviously, modify the rule to suit your environment, primarily the field name. The variable name can be anything, and the number of continuation fields can be reduced or increased as you see fit.

I hope this saves you some time.

Recently I encountered the challenge of deploying the Wazuh agent to a bunch of Windows servers. The Wazuh agent MSI package takes several parameters, and if given enough information it can register the agent, perform basic configuration and add itself to the appropriate groups – all unattended. Generally this would be quite straightforward if old-school startup scripts worked properly on Windows Server 2012. Unfortunately, they didn’t work for me.


After a short amount of research I realized that the simplest way to add parameters to a GPO-based MSI installation is to use MSI transforms (MST files), which you can create with Orca.

Download the Windows SDK (it can be found here). During the setup process you can select the MSI tools only, if you don’t need the rest of the tools – it will make for a quicker download.

This will only download Orca; you still need to install it manually. I had to search through Program Files to realize that – I should have noticed that the SDK setup offers a “Download” button rather than an install. Anyway, the Orca installer will be located at “C:\Program Files (x86)\Windows Kits\10\bin\10.0.18362.0\x86” (note that the path might not be exact, since the build version will change in the future). Simply run Orca-x86_en-us.msi to install it. After that you should see the Orca icon in “C:\Program Files (x86)\Orca”.

Orca.exe provides a handy graphical interface where you can edit a bunch of attributes of MSI files and generate the MST we need. Open up the Wazuh agent MSI in Orca and select New Transform.

Navigate to the “Property” table, right-click the whitespace, then select “Add Row”.

Add all the properties that you need for your Wazuh Agent installation by repeating this process.
Make sure you use the correct names for the parameters; more information about the deployment variables can be found in the official Wazuh documentation.

Generally, I would recommend testing the installation parameters manually before trying to create the MST. Simply run the Wazuh Agent MSI from the command line with all the parameters you plan to use. After you have successfully registered the server from the command line (without graphical interface), use the same parameters in Orca.
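A hedged example of such a test run is below – the property names (WAZUH_MANAGER, WAZUH_REGISTRATION_SERVER, WAZUH_AGENT_GROUP) are the commonly documented deployment variables, but verify them against the official docs for your agent version; the MSI file name, manager address and group name are placeholders:

msiexec /i wazuh-agent.msi /q ^
  WAZUH_MANAGER="wazuh-manager.example.com" ^
  WAZUH_REGISTRATION_SERVER="wazuh-manager.example.com" ^
  WAZUH_AGENT_GROUP="windows-servers"

If the agent registers and shows up on the manager, those exact property/value pairs are what you then add as rows in Orca’s Property table.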

After you’ve added all the values simply click on “Generate Transform” from the “Transform” drop-down menu and save the MST file.

At this point you have everything you need to create a custom GPO software deployment. Create a new policy, or add to an existing one. Click “New” -> “Package” under “Computer Configuration” -> “Policies” -> “Software installation”.
Theoretically you could also use “User Configuration”, but for something like FIM you would want the agent deployment to happen regardless of which users log in.

This will prompt you to select the MSI file. Here you need to select the original Wazuh Agent MSI stored on a network share (it needs to be accessible by the computer objects that will receive the package).

Choose Advanced, so we can add the MST.


On the next screen you can make sure the default values are fine for your environment.

Then the crucial part: add the MST file under the “Modifications” tab. Keep in mind that the MST file also needs to be accessible by the computer objects.

Then apply the GPO to the appropriate OU that contains your servers. You’ll probably have to reboot the servers for the actual deployment to occur.

This approach should be applicable to the majority of MSI files, but your mileage may vary.

Long time no write. I’ve started multiple posts in the past two years but never had time to finish them, as they were quite long. Finally, here is a quick fix/post for an issue I couldn’t find a solution to anywhere out there, so it might be helpful.

I had to configure AIDE on an old RHEL 6 (x64) server that was kind of messed up, and right after starting to unlink the previously prelinked libraries we encountered an error.

 /usr/sbin/prelink -ua
/usr/sbin/prelink: /usr/lib64/samba/libserver-role-samba4.so: Could not find one of the dependencies
/usr/sbin/prelink: /usr/pgsql-9.1/lib/libpq.so.5.4 is not present in any config file directories, nor was specified on command line 

After a quick investigation we realized that a library libserver-role-samba4.so depends on was missing.

 ldd /usr/lib64/samba/libserver-role-samba4.so
        linux-vdso.so.1 =>  (0x00007ffc3caf2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fb21fd20000)
        libsamba-debug-samba4.so => not found
        libc.so.6 => /lib64/libc.so.6 (0x00007fb21f98b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fb22014c000)

Since the file was contained in the RPM, it wasn’t actually missing – I realized it was just not in the right place. I had to create a symlink in /usr/lib64/ pointing to the file.
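Before creating the link you can double-check where the “missing” library actually lives and which package owns it – something along these lines (package and file names will of course differ per system):

find /usr/lib64 -name 'libsamba-debug-samba4.so'
rpm -qf /usr/lib64/samba/libsamba-debug-samba4.so

Then create the link: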

 ln -s  /usr/lib64/samba/libsamba-debug-samba4.so /usr/lib64/

Tried running prelink -ua again, and bam, another error.

/usr/sbin/prelink -ua
/usr/sbin/prelink: /usr/lib64/samba/libinterfaces-samba4.so: Could not find one of the dependencies

Again, the same issue:

 ldd /usr/lib64/samba/libinterfaces-samba4.so
        linux-vdso.so.1 =>  (0x00007ffdcffb3000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f779f998000)
        libreplace-samba4.so => not found
        libtalloc.so.2 => /usr/lib64/libtalloc.so.2 (0x00007f779f78a000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f779f3f6000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f779fdc5000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f779f1ee000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f779efb6000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f779edb2000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007f779ebaf000)

And a quick fix:

ln -s /usr/lib64/samba/libreplace-samba4.so /usr/lib64/

All good:

 ldd /usr/lib64/samba/libinterfaces-samba4.so
        linux-vdso.so.1 =>  (0x00007fff99f21000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f70d37e5000)
        libreplace-samba4.so => /usr/lib64/libreplace-samba4.so (0x00007f70d35e3000)
        libtalloc.so.2 => /usr/lib64/libtalloc.so.2 (0x00007f70d33d5000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f70d3041000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f70d3c12000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f70d2e39000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f70d2c01000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f70d29fd000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007f70d27fa000)

Now, back to the second part of the original error.

....
/usr/sbin/prelink: /usr/pgsql-9.1/lib/libpq.so.5.4 is not present in any config file directories, nor was specified on command line 
...

ldd did not show any issues with this library, so the solution had to be something else. It turns out you have to add the additional library paths to /etc/prelink.conf to be able to properly unlink them.

echo "-l /usr/pgsql-9.1/lib/" >> /etc/prelink.conf

After all the issues with prelinking were resolved I was happy and finally ready to run /usr/sbin/aide --init, but it wasn’t long before I encountered another issue.

 /usr/sbin/aide --init
/usr/sbin/prelink: /usr/lib64/libqmf2.so.1.0.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libsigar.so: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libqpidmessaging.so.3.2.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libunistring.so.0.1.2: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libqpidclient.so.7.0.0: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /usr/lib64/libltdl.so.7.2.1: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process
/usr/sbin/prelink: /lib64/libcap-ng.so.0.0.0: at least one of file's dependencies has changed since prelinking
Error on exit of prelink child process

Even though we ran prelink -ua to unlink (undo) all libraries, apparently not all of them got unlinked. The next fix was odd but pretty easy: just specify the libraries you need unlinked manually.

 /usr/sbin/prelink -ua /usr/lib64/libunistring.so.0.1.2  /usr/lib64/libqpidclient.so.7.0.0 /usr/lib64/libltdl.so.7.2.1 /lib64/libcap-ng.so.0.0.0 /usr/lib64/libqmf2.so.1.0.1 /usr/lib64/libsigar.so /usr/lib64/libqpidmessaging.so.3.2.1

Finally, AIDE was able to create the database successfully.

 /usr/sbin/aide --init

AIDE, version 0.14

### AIDE database at /var/lib/aide/aide.db.new.gz initialized.

I wanted to increase throughput to our file server based on Windows Server 2012, as it was getting hit pretty hard at peak hours. Of course, that’s much easier now that Microsoft finally implemented built-in support for NIC teaming, so I was very excited to try it out.

On the server side, everything can be done with a few simple steps through the GUI.

Just go to Server Manager and click the link beside the NIC Teaming option, or run LbfoAdmin.exe.


That will open up the NIC Teaming window, where you’ll see the currently configured NIC teams and their statuses, as well as the adapters available for teaming.


Select the available adapters, right-click your selection and choose Add to New Team.

On the next screen, enter an arbitrary name for the NIC team, select/deselect the adapters you want and open up Additional properties to fine-tune your NIC team.

For Teaming mode, choose LACP, and for Load balancing method choose Address Hash. Load balancing based on address hash seemed most reasonable for a machine serving multiple users simultaneously.


Note that, although Switch Independent NIC teaming sounds cool because it can be used on any switch, even cheap consumer-grade ones, it has its limitations: it will load-balance only the server’s outbound traffic, while all inbound traffic will come through a single server interface. That may still be useful in scenarios with a lot of outbound traffic, such as web servers.
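If you prefer PowerShell over the GUI, the same team can be created with the built-in NetLbfo cmdlets. A minimal sketch, assuming the member NICs are called “Ethernet 1” and “Ethernet 2” (adapter and team names are placeholders; the GUI’s Address Hash mode roughly corresponds to the TransportPorts algorithm here):

# Create an LACP team from two physical adapters
New-NetLbfoTeam -Name "FileServerTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

# Verify the team and its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "FileServerTeam"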

On the Cisco switch, in our case Catalyst 3750G, set:

Load balancing mode based on address in global configuration mode:

port-channel load-balance src-dst-ip

Create an interface for your port channel group:

interface Port-channel1

Add the physical interfaces to the port channel group in interface configuration mode with:

channel-group 1 mode active

and set channel protocol for them:

channel-protocol lacp
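Putting it all together, the relevant part of the switch configuration would look roughly like this – the physical interface names (Gi1/0/23 and Gi1/0/24) are placeholders for whatever ports your server is actually cabled to:

port-channel load-balance src-dst-ip
!
interface Port-channel1
 description LACP team to file server
!
interface range GigabitEthernet1/0/23 - 24
 channel-protocol lacp
 channel-group 1 mode active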



A few days ago we installed Piwigo, an open-source web-based photo gallery, and I can safely say a cool one.

But one might wonder why a company, let alone one in the IT industry, needs photo gallery management software. The answer is simple: there are a lot of photos from all the New Year parties and team buildings we need to manage 🙂

The first issue I encountered was that Piwigo does not have built-in LDAP authentication, which is usually one of the basic requirements in a corporate environment. A quick search revealed the “Ldap login” extension, which unfortunately didn’t work at all.

Apache authentication came to mind, and after a quick check it turned out that Piwigo has support for Apache (HTTP) authenticated users. You just need to enable it in the piwigo_root_dir/include/config_default.inc.php file: find the apache_authentication line and set it to true, like this: $conf['apache_authentication'] = true;

Now we need to set up HTTP authentication in Apache. Easy enough – just create an .htaccess file in the root directory of Piwigo with the following:

# Distinguished name of Bind user and password
AuthLDAPBindDN "CN=Your_CN,OU=Your_OU,DC=example,DC=com"
AuthLDAPBindPassword "secure_p@ssw0rd"

# LDAP URL and path to search for user
# To add multiple LDAP server for redundancy just separate them with space
AuthLDAPURL "ldap://dc1.example.com dc2.example.com/OU=Your_OU,DC=example,DC=com?sAMAccountName?sub?(objectClass=*)"

# Specify authentication type and auth provider
AuthType Basic
AuthName "Arbitrary instruction text"
AuthBasicProvider ldap

# Allow any valid user 
require valid-user

Or allow a specific user…

require ldap-user "user.name"

… or even a group.

require ldap-group "CN=Your_CN,OU=Your OU,DC=example,DC=com"

On an Ubuntu 14.04 with the LAMP packages installed I just needed to activate one additional Apache module – authnz_ldap. You can do that with a single a2enmod command, and don’t forget to restart Apache after that.
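For reference, that boils down to:

a2enmod authnz_ldap
service apache2 restart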

After the first login, the user will appear in the Piwigo administration panel, where you can set their permission level.

Cheers!

I had previous experience with the awesome ZFS at my current company, where I implemented two backup storage servers for our corporate services.

For home use I wanted to create a NAS with several network shares and some data redundancy. I chose FreeNAS – it had everything I needed already implemented, with a nice and sleek web UI. Or at least that’s what I thought.

I wanted to create a RAID-Z pool using some percentage of my three hard drives for somewhat important data, and use the rest as a separate non-redundant (or even striped) zpool – like one would do on Windows Server with dynamic disks. But FreeNAS does not support creating zpools from differently sized disks/partitions through its web interface; it always goes for the whole drive. This is not that surprising – the official ZFS documentation states that “The recommended mode of operation is to use an entire disk“. Maybe it’s not recommended, but that doesn’t mean it won’t work, and dare I say – smoothly.

So, we’ll have to get our hands dirty and do it manually.

First, we want to list all the hard drives on our system. We can do this with an ls on the /dev/ directory.

ls /dev/ada?

You could check if the drive is already partitioned with (where X is the number of the drive you want to check out):

gpart list adaX

If the drive is new, the output will be something like “gpart: No such geom: adaX”, which means there is no partition table on the drive. If the drive is already partitioned, you will most likely want to delete its partitioning info with:

gpart destroy adaX

You can add -F to force most of the commands.

Now, we want to create a new partition table for the drive with:

gpart create -s gpt adaX

Then, we want to add partitions with the desired size and type:

gpart add -s 500g -t freebsd-zfs adaX

Where “-s 500g” sets the size to 500 gigabytes, and “-t freebsd-zfs” sets the ZFS partition type.

This will create the first partition, usually named adaXp1. In my scenario, I added another partition using the remaining space on the drive. Repeat this for all the drives you want in your zpools.
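For reference, the full sequence for a single drive (ada1 here, with a 500 GB partition for the redundant pool and the remainder for the second pool – adjust device names and sizes to your own disks) would look roughly like this:

gpart create -s gpt ada1
gpart add -s 500g -t freebsd-zfs ada1   # creates ada1p1 for the RAID-Z pool
gpart add -t freebsd-zfs ada1           # creates ada1p2 from the remaining space
gpart show ada1                         # verify the resulting layout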

Now it’s time to create the zpools, a pretty straightforward procedure.

zpool create poolname raidz ada1p1 ada2p1 ada3p1

zpool create secondpoolname ada1p2 ada2p2 ada3p2

zpool will try to mount the pool under /mnt/poolname, which will fail if there is no directory with that name. Fine with me, because I want to continue using FreeNAS through its web UI so I don’t have to meddle with manual CIFS/NFS configuration.

When you go to the FreeNAS volume manager in the web UI, the zpools you just created won’t be there. No worries though – the easiest way to get them mounted and imported in the web interface is to go back to the CLI and export the pools with:

zpool export poolname

And then use the Auto Import Volume feature in the Storage tab in FreeNAS.

Extra – in case you need to do some maintenance on your ZFS pools, do it only through the CLI, as the web interface might give unpredictable or undesired results.

To replace a failed hard drive, recreate the partitioning scheme on the new drive and do:

zpool replace poolname adaXpX

Which will resilver/rebuild the ZFS pool.
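You can keep an eye on the resilvering progress and overall pool health with:

zpool status poolname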

Note that this is not a recommended way to utilize ZFS, and it may require additional manual steps to create and mount the swap partition(s) that FreeNAS usually handles. ZFS can be very memory intensive depending on the size and configuration, so take caution and use this configuration at your own risk.

If you are, like me, using MDT 2012 Update 1 and recently decided to update old images with a fresh install and the latest Microsoft patches, there is a strong chance that you might run into the following “Windows could not parse…” error during image deployment.

The error is encountered during the processing of unattend.xml, more precisely during IE customization. It appears that IE10 does not support the <IEWelcomeMsg> tag, and that causes the whole deployment to hang. The IEWelcomeMsg tag is present by default in the unattend.xml file created by MDT 2012, so the solution is either to upgrade to MDT 2013, which has this issue resolved, or to manually remove/comment the line.

You’ll find the unattend.xml file for each sequence under MDTDeploymentShare\Control\%Task Sequence ID%\

Just remove or comment the line like this:

<!-- <IEWelcomeMsg>false</IEWelcomeMsg> -->
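For context, the tag sits inside the Internet Explorer component of unattend.xml – roughly like this (the surrounding attributes and settings below are only illustrative; your generated file will differ):

<component name="Microsoft-Windows-IE-InternetExplorer" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
    <!-- <IEWelcomeMsg>false</IEWelcomeMsg> -->
    <Home_Page>about:blank</Home_Page>
</component>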