FATAL: invalid value for parameter “TimeZone”: “America/Los_Angeles” with DBeaver

I got that message when trying to use DBeaver to connect to a PostgreSQL DB.

FATAL: invalid value for parameter "TimeZone": "America/Los_Angeles"

I found the fix here. The first thing to check is the timezone set on the Postgres DB itself. You can do this by running "SELECT * FROM pg_timezone_names;".

Here’s an example:

avenger_agent_prod=# SELECT * FROM pg_timezone_names;
 name | abbrev | utc_offset | is_dst
------+--------+------------+--------
 UTC  | UTC    | 00:00:00   | f
(1 row)
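
Since the client is asking for America/Los_Angeles, you can also check whether the server knows that zone at all by filtering the same view:

SELECT * FROM pg_timezone_names WHERE name = 'America/Los_Angeles';

If that returns zero rows, the server can't honor the TimeZone the JDBC driver is sending, which is exactly what triggers the FATAL error.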

The fix is to make a small change to the dbeaver.ini file in the DBeaver root directory. If you installed DBeaver on a Mac, the file is in /Applications/DBeaver.app/Contents/Eclipse/.

Just add this under the -vmargs section:

-Duser.timezone=UTC

For context, here's a complete dbeaver.ini with the setting in place (this particular example happens to be from a Windows install):
-startup
plugins/org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.551.v20171108-1834
-showsplash
# START: change jre version, not using the one in %JAVA_HOME%
-vm 
D:\ArPortable\Java\jdk1.8.0_171\jre\bin\server\jvm.dll
# END
# JVM settings
-vmargs
-XX:+IgnoreUnrecognizedVMOptions
--add-modules=ALL-SYSTEM
-Xms64m
-Xmx1024m
# time zone
-Duser.timezone=UTC
# language
-Duser.language=en

When you restart DBeaver, it should connect.
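
Once you're connected, you can confirm what the session actually got (SHOW timezone is standard PostgreSQL):

SHOW timezone;

It should now come back as UTC, since the JDBC driver picks up the session timezone from the JVM's user.timezone setting.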

How to test open ports w/o telnet or nc

Found this out of necessity when a security team didn't allow the installation of either telnet or nc. I initially thought ssh could do the job, but it doesn't really work for this.

The command is simple. It uses bash's built-in /dev/tcp pseudo-device (so it has to run in bash, not plain sh):

> /dev/tcp/<host>/<port>

Replace the <host> and <port>. Here’s how it would look if successful:

SV-LT-1361:~ altonyu$ > /dev/tcp/192.168.0.11/2049
SV-LT-1361:~ altonyu$ echo $?
0

Here’s how it would look if unsuccessful:

SV-LT-1361:~ altonyu$ > /dev/tcp/192.168.0.11/2047
-bash: connect: Connection refused
-bash: /dev/tcp/192.168.0.11/2047: Connection refused
SV-LT-1361:~ altonyu$ echo $?
1

Obviously, if the command just hangs, the port probably isn't reachable either; that usually means a firewall is silently dropping the packets.
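
If you do this often, you can wrap the trick in a small function so a filtered port doesn't leave your shell hanging. A minimal sketch, assuming the GNU coreutils timeout command is available (it isn't part of bash itself):

check_port() {
    # Usage: check_port <host> <port>
    # Uses bash's /dev/tcp, so no telnet or nc required.
    local host=$1 port=$2
    if timeout 5 bash -c "> /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} is open"
    else
        echo "${host}:${port} is closed or filtered"
    fi
}

check_port 192.168.0.11 2049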

Hope this helps someone!

Use update-ca-trust! Or update-ca-certificates.

Don't just append to /etc/ssl/certs/ca-certificates.crt or /etc/ssl/certs/ca-bundle.crt.

Not long ago, I thought it didn't matter. I figured that since the update-ca-trust command just regenerates the bundle, I might as well skip a step and edit the bundle directly. I was wrong. Don't do it. I guess that's why people actually have processes and directions to follow.

The files are not meant to be edited manually. They are generated by the update-ca-trust or update-ca-certificates commands, which scan the /etc/pki/ca-trust/source/anchors or /usr/local/share/ca-certificates directories for custom CAs and then concatenate them with the system CAs into a single file. If you edit these files directly, your changes will be overwritten the next time those commands run, which means your changes won't survive patching if the ca-certificates package is updated.

The reason you want to put the certificate issuers in /etc/pki/ca-trust/source/anchors/ or /usr/local/share/ca-certificates/ and use the update-ca-trust extract or update-ca-certificates commands is so that your CA survives updates. If someone patches the machine and other certificates get updated, anything you appended to the bundle by hand will be wiped out.

Follow the process! For me, that's basically the following (a command sketch follows the list):

1. Copy your custom CA file (in PEM format) to the /etc/pki/ca-trust/source/anchors directory on Red Hat-based systems, or the /usr/local/share/ca-certificates directory on Debian-based systems. Make sure the file has a .crt extension.
2. Run the update-ca-trust or update-ca-certificates command as root. This will regenerate the /etc/ssl/certs/ca-certificates.crt or the /etc/ssl/certs/ca-bundle.crt file with your custom CA included.
3. Restart any services or applications that use SSL/TLS connections, such as web servers, browsers, curl, etc. They should now trust your custom CA.
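
Concretely, the whole flow looks something like this on each family (my-custom-ca.crt is just a placeholder file name; run as root):

# Red Hat family
cp my-custom-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

# Debian family
cp my-custom-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates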

How to clear the filesystem buffer cache in Linux

You'd probably want to do this if you suspect you're falling short on memory, or while diagnosing performance issues. It's much faster than a reboot, and if it solves your problem, you'll know what to do in the future. It's not likely that you'll need to do this often; it's just good to know how.

If you’re interested in more information regarding your memory, you can go look at: https://www.linuxatemyram.com/

Taken from: https://stackoverflow.com/questions/29870068/what-are-pagecache-dentries-inodes

With some oversimplification, let me try to explain in what appears to be the context of your question because there are multiple answers.

It appears you are working with memory caching of directory structures. An inode in this context is a data structure that represents a file. A dentry is a data structure that represents a directory. These structures can be used to build a memory cache that represents the file structure on disk. To get a directory listing, the OS can go to the dentry cache; if the directory is there, it lists the contents (a series of inodes). If it's not there, it goes to the disk and reads it into memory so that it can be used again.

The page cache can contain any memory mapping to blocks on disk: buffered I/O, memory-mapped files, paged areas of executables, anything that the OS can hold in memory from a file.

The commands below flush these caches.

Every Linux system has three options to clear the cache without interrupting any processes or services.

  1. Clear PageCache only.
    • # sync; echo 1 > /proc/sys/vm/drop_caches
  2. Clear dentries and inodes.
    • # sync; echo 2 > /proc/sys/vm/drop_caches
  3. Clear PageCache, dentries and inodes.
    • # sync; echo 3 > /proc/sys/vm/drop_caches

Explanation of the above commands: sync flushes the file system buffers. Commands separated by ";" run sequentially; the shell waits for each command to terminate before executing the next one. As mentioned in the kernel documentation, writing to drop_caches cleans the cache without killing any application or service; the echo command is just doing the job of writing to the file. If you have to clear the disk cache, the first command ("echo 1") is safest in enterprise and production, as it clears the PageCache only. The third option ("echo 3") is not recommended in production unless you know exactly what you are doing, as it clears PageCache, dentries, and inodes.

Is it a good idea to free the buffer and cache that might be in use by the Linux kernel? If you are applying various settings and want to check whether they actually took effect, especially on an I/O-intensive benchmark, then you may need to clear the buffer cache. You can drop the cache as explained above without rebooting the system, i.e., no downtime required.

Linux is designed to look in the disk cache before looking on disk. If it finds the resource in the cache, the request never reaches the disk. If we clear the cache, the OS will go looking for resources on disk again, so the cache is less useful for a while. The system will also slow down for a few seconds while the cache is rebuilt and everything the OS needs is loaded back into the disk cache.

Now we'll create a shell script to auto-clear the RAM cache daily at 2am via cron. Create a shell script clearcache.sh and add the following lines:

#!/bin/bash
# Note: we are using "echo 3", but it is not recommended in production; use "echo 1" instead
sync; echo 3 > /proc/sys/vm/drop_caches

Set execute permission on the clearcache.sh file:

# chmod 755 clearcache.sh

Now you can call the script whenever you need to clear the RAM cache. To run it at 2am daily, open the crontab for editing:

# crontab -e

Append the line below, then save and exit:

0 2 * * * /path/to/clearcache.sh

Is it a good idea to auto-clear the RAM cache on a production server? No, it is not. Think of a situation where you have scheduled the script to clear the RAM cache every day at 2am. One day, for whatever reason, more users than expected are online and requesting resources from your server. At the same time, the scheduled script runs and clears everything in the cache. Now all of those users are fetching data from disk, which will crush performance and can even bring the server down. So clear the RAM cache only when required and when you know what you're doing; otherwise you're a Cargo Cult System Administrator.

How do you clear swap space in Linux? If you want to clear swap space, you can run the command below. You can also add it to the cron script above, once you understand all the associated risks.

# swapoff -a && swapon -a

Combining the two gives a single command that clears both the RAM cache and swap space:

# echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a && printf '\n%s\n' 'Ram-cache and Swap Cleared'

OR

$ su -c "echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a && printf '\n%s\n' 'Ram-cache and Swap Cleared'" root

After running either of the above, run "free -h" before and after and compare the cache numbers.
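
A minimal sketch of that before-and-after check (run as root, since writing to drop_caches requires it):

sync                                # flush dirty pages to disk first
free -h                             # note the buff/cache column
echo 1 > /proc/sys/vm/drop_caches   # safest option: drop page cache only
free -h                             # buff/cache should now be noticeably smaller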

What’s NonRootPortBinding? I just want to run my web server on port 443!

In the Unix world, privileged ports are 1-1023. As a non-root user, you're not allowed to start a service that listens on them.

So, how do web servers work then? They usually use ports 80 and 443.

There are a few ways around this. The most common is that the process starts as root and then drops its privileges.

If you want to start a process without ever having root access, though, the way to do it is with NonRootPortBinding. You can find information about doing it with Apache here.

Basically, for any process you want to start on a port below 1024, you can run:

setcap cap_net_bind_service=+ep <path to binary> 

Following that, you can confirm that you’ve set the correct permission by running:

getcap <path to binary> 

It should return with: cap_net_bind_service+ep

When you patch or update the binary, you will need to rerun the setcap command.
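
For example, with a hypothetical binary at /usr/local/bin/mywebd (the path and the --port flag are placeholders; substitute your server's own):

sudo setcap cap_net_bind_service=+ep /usr/local/bin/mywebd
getcap /usr/local/bin/mywebd     # should show cap_net_bind_service+ep

# a regular, non-root user can now bind it to 443
/usr/local/bin/mywebd --port 443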

Hope this helps!

Using 2 ISPs at the same time, any routers!

I have another blog posting where I talk about how to use 2 ISPs at the same time, with the router load balancing the outbound connections.

Since then, I've upgraded my primary Internet connection to the point where it hardly seemed worth keeping the other one. I now have one WAN link that's over 500 Mbps and another that's 20 Mbps. How do you load balance that? Why bother with the 20 Mbps at all? For exactly that reason, I just left it unplugged for months…

Then I thought about it and turned it back on. Initially it was just a backup in case my primary went down, which it inevitably does from time to time before coming back up, but now I use them both concurrently. I had an older router lying around, so that made the decision easy. If I didn't have the extra router, I may not have gone out to buy another one.

Basically, the way I’m using it is like this:

192.168.0.1 is my primary router. It has my primary WAN link and all of my Internet traffic goes through it, with the exception of some DNS traffic.

Just a screenshot of my primary router settings. Why am I not using ASUS-Merlin? I don't know. It didn't support AiMesh when I first set up the mesh. Merlin does now, but I keep thinking I'd be giving something up. Maybe I'll switch to Merlin someday soon. I'll be sure to blog about it if I do.

192.168.0.6 is my secondary router. This is where my hosts go to get Internet access if my primary goes down. Hopefully the primary link never goes down for an extended period, but if it does, this is what I'll use. I do need to manually reconfigure my clients: just change them from automatic DHCP to manual and, instead of using 192.168.0.1 as the default gateway, switch it to 192.168.0.6. I use the same DNS servers. DHCP is turned off on this router.

Very simple DD-WRT secondary router setup

DNS server – I have a separate caching DNS server that runs just to cache DNS requests. On it, I use forwarders to resolve DNS requests, to avoid full recursive lookups where possible. To reach those forwarders, I route half of them through my primary ISP and half through the secondary.

My bind configuration options look like this:

forwarders {
    9.9.9.9;
    208.67.220.220;
    1.1.1.1;
    8.8.8.8;
    208.67.222.222;
};
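
After editing the options block, a syntax check and reload will apply the change (standard BIND tooling):

named-checkconf && rndc reload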

My routing table looks like this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 ens160
8.8.8.8         192.168.0.6     255.255.255.255 UGH   0      0        0 ens160
68.87.20.6      192.168.0.6     255.255.255.255 UGH   0      0        0 ens160
96.114.157.81   192.168.0.6     255.255.255.255 UGH   0      0        0 ens160
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 ens160
208.67.220.220  192.168.0.6     255.255.255.255 UGH   0      0        0 ens160
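
Host routes like those can be added on the DNS server with the ip command; for example, with my addresses and interface (adjust for your own setup):

ip route add 8.8.8.8/32 via 192.168.0.6 dev ens160
ip route add 208.67.220.220/32 via 192.168.0.6 dev ens160

To survive a reboot, they'd go in the distro's persistent network configuration rather than being run by hand.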

With this configuration, I'm basically using one ISP for everything, with the small exception of some DNS traffic. When my primary ISP goes down, I don't really think about DNS. Maybe I'll check next time to see if DNS is still working. Usually when it goes down, I look at my router and see that my modem is rebooting.

Anyway, when I want to fail over to my 2nd ISP, I do it simply on the device like this:

When configuring manually, you’ll need to configure the DNS as well.

Hope this helps! Please leave any comments or questions below!

Don’t skip inutoc . !!! (AIX)

I’m not an AIX expert. I’ve only been on an AIX command line probably 3-4 hours at the most in my entire career.

I just know that when you're installing AIX packages, make sure you run "inutoc ." first. I have some notes on AIX here.

Basically, after you copy an ipfl file into a directory, say /tmp, you'll need to run these two commands from that directory:

inutoc .
installp -ac -gXY -d. ipfl

Otherwise the install will fail with some message that I can't remember. The reason is that inutoc generates the .toc (table of contents) file that installp reads to find the installable images in the directory.
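
If you want to sanity-check it, the .toc file should exist in the directory before you run installp:

cd /tmp
inutoc .
ls -l .toc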

Rebroadcast your neighbor’s wifi for yourself (wifi extender) with Tomato firmware

My parents recently swapped Internet providers, and since they didn't know it would take a week for the application to be processed, they were without Internet service for about a week. The neighbors graciously allowed them to use theirs, but the signal didn't reach the entire house. To make it reach, we configured a router to rebroadcast their wifi. If you're going to do this, please make sure you get permission first!

The easiest way to do this is to just get one of those wifi extenders. We just didn't happen to have any at the time. Since the router was Tomato-compatible, I first flashed it with Tomato. The screenshots you're seeing are Tomato by Shibby, just with a custom skin.

To do this, you first need to find out what IP address range you can use. I did this by connecting a laptop to their wifi. It turned out the IP address their DHCP server gave me was 192.168.7.x. I pinged 192.168.7.253 to make sure it wasn't taken, and sure enough, it wasn't, so I assigned 192.168.7.253 to my router.
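
A quick way to run that check from the laptop (ping isn't authoritative, since hosts can ignore it, but it's reasonable on a home network):

ping -c 3 192.168.7.253       # no replies suggests the address is unused
arp -an | grep 192.168.7.253  # an incomplete ARP entry also means nothing answered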

Next, I needed to disable DHCP on my router; you don't want your DHCP server competing with the neighbor's. Lastly, use the default gateway their DHCP server hands out; in my case, it was 192.168.7.1. You can use their DNS server too, or pick your own. I like Quad9's 9.9.9.9, Cloudflare's 1.1.1.1, or Google's 8.8.8.8.

After that, match your wifi settings up with theirs so the routers can connect: use the exact same SSID and shared key, and set the Wireless Network Mode to "Wireless Ethernet Bridge".

Lastly, and optionally, you can add your own wifi settings as a virtual wireless interface so that you don't need to reconfigure any of your own devices.

The virtual interface is wl0.1. Just add it and that's it!

That's all you need to do to make your own Tomato wireless extender. It has much better range than a regular wifi extender, and it's what we had available at the time.

What in the hell is zypper!?

Had an issue recently with SUSE Linux where I saw this:

warning: /var/cache/zypper/RPMS/rpmname-9.3.0-6104.s12.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 7baa810c: NOKEY
Checking directory paths.

2019-11-16T10:29:00-05:00 Package installation failed

If you run into this, I would recommend:

  1. Run "zypper ref -s" to refresh all repositories and services, and see if that clears it up (sketched below).
  2. Check whether a quota was exceeded on the server by reviewing the SUSE knowledge base article: https://www.suse.com/support/kb/doc/?id=7017376
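
A rough sketch of that first step, plus the usual follow-up for the NOKEY warning (the key file path here is hypothetical; the real key comes from the repository vendor):

zypper ref -s                                # refresh all services and repositories
rpm --import /path/to/repo-signing-key.asc   # import the repo's signing key if NOKEY persists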

Don’t travel without an adapter with USB ports!

If you're ever going overseas, you'll need a plug adapter for your electronics, probably your iPhone/iPad/laptop. I'd highly recommend a travel adapter that includes some USB charging ports. Most hotels nowadays have these things built in, but most airports don't, and probably most Airbnbs don't either.

There are other kinds. The ones you want to avoid are the ones with many parts. I was embarrassed when I was in Singapore with a travel adapter that didn't fit: one end was literally made to fit into the other, and it didn't! This one is a single piece, so besides the cords, there's not much that can break. Additionally, there's a USB-C port. Many tablets and phones are now exclusively USB-C, so it works great for that use case!

The adapter/charger I got is the Achoro one. I don't know if there's a name-brand version of these, but this one has served me well over several trips. I plug it into the wall, where it doubles as a built-in nightlight, and charge my iPhone/iPad/watch/laptop overnight, all at the same time. https://usgiftgiant.com/achoro-4-usb-ports-travel-adapter/

For just $20, I'd say this is a great deal. I'd recommend putting one in the suitcase and one in the backpack just in case; these things are easily lost, and unless you're in Asia, they're pretty expensive to buy when you need one immediately.