How to test open ports w/o telnet or nc

Found this out of necessity when a security team didn’t allow the installation of either telnet or nc. I initially thought ssh could stand in, but it doesn’t give you a clean yes/no answer for an arbitrary port.

The command is simple; it uses bash's built-in /dev/tcp pseudo-device. Just redirect to it:

> /dev/tcp/<host>/<port>

Replace <host> and <port> with your target, then check the exit status. Here's how it looks when the port is open:

SV-LT-1361:~ altonyu$ > /dev/tcp/192.168.0.11/2049
SV-LT-1361:~ altonyu$ echo $?
0

Here’s how it would look if unsuccessful:

SV-LT-1361:~ altonyu$ > /dev/tcp/192.168.0.11/2047
-bash: connect: Connection refused
-bash: /dev/tcp/192.168.0.11/2047: Connection refused
SV-LT-1361:~ altonyu$ echo $?
1

If the command just hangs, that usually means a firewall in between is dropping the packets, so treat it as a failure too.
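
If you don't want to wait on a filtered port, you can wrap the check in timeout (assuming GNU coreutils is installed); the host and port below are just the ones from the example above:

timeout 5 bash -c '>/dev/tcp/192.168.0.11/2049' && echo "port open" || echo "closed or filtered"

The exit status is 0 when the connection succeeds, non-zero on a refusal, and 124 when timeout has to kill the attempt.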

Hope this helps someone!

Use update-ca-trust! Or update-ca-certificates.

Don’t just append your CA to /etc/ssl/certs/ca-certificates.crt or /etc/ssl/certs/ca-bundle.crt.

Not long ago, I thought that it didn’t matter. I figured that since update-ca-trust just updates the bundle anyway, I might as well skip a step and edit the bundle directly. I was wrong. Don’t do it. I guess that’s why people actually have processes and directions to follow.

The files are not meant to be edited manually. They are generated by the update-ca-trust or update-ca-certificates commands, which scan the /etc/pki/ca-trust/source/anchors or /usr/local/share/ca-certificates directories for custom CAs and concatenate them with the system CAs into a single file. If you edit these files directly, your changes will be overwritten the next time these commands run, which means they may not survive patching if the ca-certificates package is updated.

The reason you want to put the certificate issuers in /etc/pki/ca-trust/source/anchors/ (or /usr/local/share/ca-certificates/ on Debian-based systems) and use update-ca-trust enable/extract or update-ca-certificates is so the change survives an update. If someone patches the machine and other certificates get regenerated, anything you appended to the bundle by hand gets wiped out.

Follow the process! For me, that’s basically:

1. Copy your custom CA file (in PEM format) to the /etc/pki/ca-trust/source/anchors directory on Red Hat-based systems, or the /usr/local/share/ca-certificates directory on Debian-based systems. On Debian-based systems, make sure the file has a .crt extension, since update-ca-certificates only picks up .crt files.
2. Run the update-ca-trust or update-ca-certificates command as root. This will regenerate the /etc/ssl/certs/ca-certificates.crt or the /etc/ssl/certs/ca-bundle.crt file with your custom CA included.
3. Restart any services or applications that use SSL/TLS connections, such as web servers, browsers, curl, etc. They should now trust your custom CA.
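
As a sketch, here's roughly what those steps look like on each family; the my-custom-ca file name and the internal.example.com URL are placeholders:

# Red Hat-based (RHEL, CentOS, Fedora)
cp my-custom-ca.pem /etc/pki/ca-trust/source/anchors/my-custom-ca.crt
update-ca-trust extract

# Debian-based (Debian, Ubuntu)
cp my-custom-ca.pem /usr/local/share/ca-certificates/my-custom-ca.crt
update-ca-certificates

# quick check: curl should now verify the chain without --insecure
curl -v https://internal.example.com >/dev/null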

How to clear the filesystem buffer cache in Linux

You might want to do this if you suspect you’re falling short on memory, or while you’re diagnosing performance issues. It’s much faster than a reboot, and if it solves your problem, you’ll know what to do next time. You’re unlikely to need it often; it’s just good to know how.

If you’re interested in more information regarding your memory, you can go look at: https://www.linuxatemyram.com/
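
Before dropping anything, it's worth checking how much of your "used" memory is actually reclaimable cache. A quick look (column names vary a bit between procps versions):

free -h                                                # the "buff/cache" column is mostly reclaimable
grep -E 'Cached|Buffers|SReclaimable' /proc/meminfo    # a more detailed breakdown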

Taken from: https://stackoverflow.com/questions/29870068/what-are-pagecache-dentries-inodes

With some oversimplification, let me try to explain in what appears to be the context of your question because there are multiple answers.

It appears you are working with memory caching of directory structures. An inode in your context is a data structure that represents a file. A dentry is a data structure that represents a directory. These structures can be used to build a memory cache that represents the file structure on a disk. To get a directory listing, the OS can go to the dentry cache; if the directory is there, it lists its contents (a series of inodes). If it isn’t there, it goes to the disk and reads it into memory so that it can be used again.

The page cache can contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory-mapped files, paged areas of executables, anything that the OS can hold in memory from a file.

The commands flush these buffers.

Every Linux system has three options to clear the cache without interrupting any processes or services.

  1. Clear PageCache only.
    • # sync; echo 1 > /proc/sys/vm/drop_caches
  2. Clear dentries and inodes.
    • # sync; echo 2 > /proc/sys/vm/drop_caches
  3. Clear PageCache, dentries and inodes.
    • # sync; echo 3 > /proc/sys/vm/drop_caches

Explanation of the above commands: sync flushes the filesystem buffers. Commands separated by ";" run sequentially; the shell waits for each command to terminate before executing the next one in the sequence. As mentioned in the kernel documentation, writing to drop_caches cleans the cache without killing any application or service, and echo is simply doing the job of writing to the file. If you have to clear the disk cache, the first command is the safest in enterprise and production, since "echo 1" clears the PageCache only. The third option, "echo 3", is not recommended in production unless you know what you are doing, as it clears the PageCache, dentries and inodes.

Is it a good idea to free buffers and cache that the Linux kernel might be using? If you are applying various settings and want to check whether they actually take effect, especially for an I/O-intensive benchmark, then you may need to clear the buffer cache. You can drop the cache as explained above without rebooting the system, i.e. no downtime required.

Linux is designed to look in the disk cache before looking at the disk. If it finds the resource in the cache, the request never reaches the disk. If we clear the cache, the cache is less useful and the OS has to look for every resource on disk again. The system will also be slower for a while as everything the OS needs is loaded back into the cache.

The article then walks through auto-clearing the RAM cache daily at 2am via cron. Create a shell script clearcache.sh with the following lines (note: it uses "echo 3", which is not recommended in production; use "echo 1" instead):

#!/bin/bash
# "echo 3" drops PageCache, dentries and inodes; prefer "echo 1" in production
sync; echo 3 > /proc/sys/vm/drop_caches

Set execute permission on the clearcache.sh file:

chmod 755 clearcache.sh

Now you can call the script whenever you need to clear the RAM cache, or schedule it with cron. Open the crontab for editing with crontab -e and append the following line to run it at 2am daily:

0 2 * * * /path/to/clearcache.sh

Is it a good idea to auto-clear the RAM cache on a production server? No, it is not. Think of a situation where you have scheduled the script to clear the RAM cache every day at 2am. One day, for whatever reason, more users than expected are online and requesting resources from your server. At the same time the scheduled script runs and clears everything in the cache, so now every user is fetching data from disk, which can slow the server to a crawl or even take it down. So clear the RAM cache only when required and when you know what you’re doing; otherwise you’re a cargo-cult system administrator.

How do you clear swap space in Linux? If you want to clear swap, run:

swapoff -a && swapon -a

You can also add that to the cron script above, once you understand the associated risks. Combining both commands into a single line to clear the RAM cache and swap:

echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a && printf '\n%s\n' 'Ram-cache and Swap Cleared'

or, from a non-root shell:

su -c "echo 3 > /proc/sys/vm/drop_caches && swapoff -a && swapon -a && printf '\n%s\n' 'Ram-cache and Swap Cleared'" root

Run free -h before and after to see the effect on the cache.

What’s NonRootPortBinding? I just want to run my web server on port 443!

In the Unix world, the privileged ports are 1-1023 (everything below 1024). As a non-root user, you’re not allowed to start a service that listens on them.

So, how do web servers work then? They usually use ports 80 and 443.

There are a few ways around this. The most common is that the process starts as root, binds the port, and then drops privileges to an unprivileged user.

If you want to start a process without ever having root access though, the way to do it is with NonRootPortBinding. You can find information about it using Apache here.

Basically, for any process you want to start on a port below 1024, you can run:

setcap cap_net_bind_service=+ep <path to binary> 

Following that, you can confirm that you’ve set the correct permission by running:

getcap <path to binary> 

It should return with: cap_net_bind_service+ep

When you patch or update the binary, you will need to rerun the setcap command.
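
As a concrete sketch, assuming a locally built nginx at /usr/local/sbin/nginx (the path is just an example; setcap itself has to run as root):

sudo setcap cap_net_bind_service=+ep /usr/local/sbin/nginx
getcap /usr/local/sbin/nginx
# newer libcap prints: /usr/local/sbin/nginx cap_net_bind_service=ep
# older versions print: /usr/local/sbin/nginx = cap_net_bind_service+ep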

Hope this helps!

Don’t skip inutoc . !!! (AIX)

I’m not an AIX expert. I’ve only been on an AIX command line probably 3-4 hours at the most in my entire career.

I just know that when you’re installing AIX packages, make sure you run inutoc . first. I have some notes on AIX here.

Basically, after you copy an ipfl file into a directory, say /tmp, you will need to run these two commands from that directory:

inutoc .
installp -ac -gXY -d. ipfl

Otherwise the install will fail with some message that I can’t remember.
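
Here's the same sequence with comments, assuming the fileset was copied to /tmp (inutoc builds the .toc table of contents that installp reads):

cd /tmp
inutoc .                      # (re)generate the .toc file for this directory
installp -ac -gXY -d . ipfl   # apply and commit, pulling in prerequisites, expanding filesystems, agreeing to licenses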

What in the hell is zypper!?

Had an issue recently with SUSE Linux where I saw this:

warning: /var/cache/zypper/RPMS/rpmname-9.3.0-6104.s12.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 7baa810c: NOKEY
Checking directory paths.

2019-11-16T10:29:00-05:00 Package installation failed

If you run into this, I would recommend:

  1. Run the command "zypper ref -s" to refresh the repository cache and services, and see if that clears it up.
  2. Check whether a quota has been exceeded on the server by reviewing the SUSE knowledge base article: https://www.suse.com/support/kb/doc/?id=7017376
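
A quick sketch of those checks (the NOKEY warning itself usually just means the repository's signing key hasn't been imported yet):

zypper ref -s            # refresh all repositories and services
df -h /var               # a full /var (or an exceeded quota) can also break package installs
rpm -qa 'gpg-pubkey*'    # list the GPG keys rpm currently knows about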

WordPress is under attack! Watch it! Password Protect it!

What? What do you mean? There’s already a password. Yes, you need to log in when you want to put up a new blog post or do maintenance of some sort. However, that doesn’t mean that you can’t have an additional layer of protection. Not only can you have it, WordPress actually recommends it here: https://codex.wordpress.org/Brute_Force_Attacks

I looked in my nginx access log and I saw a bunch of messages that looked like this:

95.219.148.136 - - [16/Nov/2017:06:34:33 -0800] "GET /wp-login.php HTTP/1.1" 402 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
95.219.148.136 - - [16/Nov/2017:06:34:34 -0800] "GET / HTTP/1.1" 200 21587 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
202.152.71.21 - - [16/Nov/2017:06:40:48 -0800] "GET /wp-login.php HTTP/1.1" 402 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
202.152.71.21 - - [16/Nov/2017:06:40:49 -0800] "GET / HTTP/1.1" 200 21589 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
177.221.4.36 - - [16/Nov/2017:06:55:42 -0800] "GET /wp-login.php HTTP/1.1" 402 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"
177.221.4.36 - - [16/Nov/2017:06:55:42 -0800] "GET / HTTP/1.1" 200 21589 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1"

After doing some investigation, it appeared to be Sathurbot attacking my blog. It’s a distributed piece of malware that goes after poorly maintained blogs or blogs with weak passwords; it tries to brute-force wp-login.php, among other things. You can read more about it here: https://www.welivesecurity.com/2017/04/06/sathurbot-distributed-wordpress-password-attack/.

The first thing I did to counter this was switch Cloudflare to “I’m Under Attack” mode. That gives every client a short challenge delay when connecting to the site, so the bots never reach the file, and it stopped the log entries immediately. Since I don’t want users to see that delay all of the time, once the attacks slowed down I had nginx password-protect the file instead, so that any request for it gets asked for a password as well. This way you have to authenticate twice to get into WordPress, but that’s okay; the extra step gives me peace of mind that a brute-force attack is much less likely to succeed.

With nginx, I did it this way:

location ^~ /wp-login.php {
 auth_basic "Administrator Login";
 auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
 include fastcgi.conf;
 fastcgi_intercept_errors on;
 fastcgi_pass php-wphandler;
 fastcgi_buffers 16 16k;
 fastcgi_buffer_size 32k;
}

The .htpasswd file contains usernames and hashed passwords. You can create it with the htpasswd command that comes with the apache2-utils package. The file would look something like this:

alton:$@AFSADF$SDFapr1$yDoxiXVW$aFe
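
A sketch of creating that file and reloading nginx (the username and path match the config above; adjust to taste):

htpasswd -c /etc/nginx/conf.d/.htpasswd alton   # -c creates the file; drop -c when adding more users
nginx -t && nginx -s reload                     # test the config, then reload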

Now in my logs, I get 401 messages instead of 402 messages.

172.68.242.50 - - [29/Nov/2017:09:36:50 -0800] "GET /wp-login.php HTTP/1.1" 401 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1" "134.196.23.66"
172.68.246.96 - - [29/Nov/2017:09:45:48 -0800] "GET /wp-login.php HTTP/1.1" 401 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1" "193.93.187.11"
162.158.91.51 - - [29/Nov/2017:09:49:22 -0800] "GET /wp-login.php HTTP/1.1" 401 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1" "93.172.55.76"
141.101.77.120 - - [29/Nov/2017:10:08:03 -0800] "GET /wp-login.php HTTP/1.1" 401 195 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1" "41.100.125.248"

I also know that they’re less likely to hack my site. 🙂

Happy blogging!

Using vim-cmd to remedy a bsod

If you haven’t used vim-cmd before, here’s a great tutorial by my friend, Steve Jin: www.doublecloud.org/2013/11/vmware-esxi-vim-cmd-command-a-quick-tutorial/

This is a real-world situation I got myself into when I tried connecting to my client VM and found it sitting at a BSOD.

From the usbuhci.sys line on the blue screen, it’s pretty obvious that the crash was caused by the USB stick plugged into the VM. Since I tunnel into my client VM via SSH and VNC, the easiest way for me to shut down the VM and remedy this is through vim-cmd. This only works if you have SSH enabled on your ESXi host, or if you connect to the host with the VMware CLI or vMA or whatever they’re calling it these days. I have the former.

The first thing I do after logging into the ESXi host as root is run:

vim-cmd vmsvc/getallvms

I need to know which one of my VMs is the one to manage. I get this:

Vmid   Name       File                                   Guest OS           Version   Annotation
1      windows7   [BIG_DISK] windows7/windows7.vmx       windows7_64Guest   vmx-07
3      thimble    [BIG_DISK] thimble/thimble.vmx         ubuntu64Guest      vmx-08
4      chunli     [Datastore 2] chunli2/chunli2.vmx      ubuntu64Guest      vmx-08
5      zangief    [Datastore 2] zangief2/zangief2.vmx    ubuntu64Guest      vmx-11

With this information, I know that it’s VM 1, so I power it off by running:

vim-cmd vmsvc/power.off 1

Thinking the USB issue might be a fluke, I try to power the VM back on to see if it will boot.

vim-cmd vmsvc/power.on 1

I see that it starts booting, but as soon as the resolution changes on the VM, my VNC viewer freezes. Because of that, I couldn’t tell whether the VM had hit the BSOD again.

Then I decided to look at the vmware.log file. This is what I saw there:

2017-10-18T22:27:05.519Z| svga| I125: SVGA disabling SVGA
2017-10-18T22:27:05.545Z| svga| W115: WinBSOD: (20) 'Technical information: '
2017-10-18T22:27:05.545Z| svga| W115:
2017-10-18T22:27:05.546Z| svga| W115: WinBSOD: (22) '*** STOP: 0x000000D1 (0xFFFFF88000BF2000,0x0000000000000002,0x0000000000000001,0'
2017-10-18T22:27:05.546Z| svga| W115:
2017-10-18T22:27:05.546Z| svga| W115: WinBSOD: (23) 'xFFFFF88004206E49) '
2017-10-18T22:27:05.546Z| svga| W115:
2017-10-18T22:27:05.557Z| svga| W115: WinBSOD: (26) '*** usbuhci.sys - Address FFFFF88004206E49 base at FFFFF88004200000, DateStamp'
2017-10-18T22:27:05.557Z| svga| W115:
2017-10-18T22:27:05.557Z| svga| W115: WinBSOD: (27) ' 57b37a29 '
2017-10-18T22:27:05.557Z| svga| W115:
2017-10-18T22:27:05.557Z| svga| W115: WinBSOD: (30) 'Collecting data for crash dump ... '
2017-10-18T22:27:05.557Z| svga| W115:
2017-10-18T22:27:05.573Z| svga| W115: WinBSOD: (31) 'Initializing disk for crash dump ... '
2017-10-18T22:27:05.573Z| svga| W115:
2017-10-18T22:27:07.547Z| mks| W115: Guest operating system crash detected.

Okay, so I see that my hunch is correct. I guess it’s time I remove the USB device from the VM. So I power off the VM again and open up the vmx file and just start removing all instances of USB.

These are the lines I removed. Don’t worry about breaking anything; the hypervisor will put them back if you need them later. Back up your vmx file before editing it, though, just in case (see the sketch after the list).

usb.pciSlotNumber = "34"
usb.present = "TRUE"
usb:1.speed = "2"
usb:1.present = "TRUE"
usb:1.deviceType = "hub"
usb:1.port = "1"
usb:1.parent = "-1"
usb.autoConnect.device0 = "path:1/1 autoclean:1"
usb:0.present = "TRUE"
usb:0.deviceType = "mouse"
usb:0.port = "0"
usb:0.parent = "-1"

After you’ve saved your changes, you need to tell ESXi to reread the .vmx file so that the USB device actually goes away. You can do that by running:

vim-cmd vmsvc/reload 1
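
If you want to double-check before booting, a couple of read-only sub-commands help; this is a sketch, and the grep pattern is just a guess at the relevant lines:

vim-cmd vmsvc/get.config 1 | grep -i usb   # the removed usb entries should no longer show up
vim-cmd vmsvc/power.getstate 1             # confirm the VM is still powered off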

Now you’re ready to power on the VM.

vim-cmd vmsvc/power.on 1

The VM powers up and I’m back in business. I just had to figure out the USB issue later. Turned out that I just needed to reconnect the device and reformat it. I haven’t seen the issue come up again.


Red Hat Enterprise Linux 7.3 is broken!

At least kernel-3.10.0-514.26.2.el7.x86_64.rpm is broken. With it, you will not be able to use a stack size lower than ~4.5MB.

Here’s some reading on why your applications would want to do this: https://www.systemcodegeeks.com/shell-scripting/bash/using-rlimit-and-why-you-should/

Here’s an excerpt:

Why do we care?

Security in depth.

First, people make mistakes. Setting reasonable limits keeps a runaway process from taking down the system.

Second, attackers will take advantage of any opportunity they can find. A buffer overflow isn’t an abstract concern – they are real and often allow an attacker to execute arbitrary code. Reasonable limits may be enough to sharply curtail the damage caused by an exploit.

Here are some concrete examples:

First, setting RLIMIT_NPROC to zero means that the process cannot fork/exec a new process – an attacker cannot execute arbitrary code as the current user. (Note: the man page suggests this may limit the total number of processes for the user, not just in this process and its children. This should be double-checked.) It also prevents a more subtle attack where a process is repeatedly forked until a desired PID is acquired. PIDs should be unique but apparently some kernels now support a larger PID space than the traditional pid_t. That means legacy system calls may be ambiguous.

Second, setting RLIMIT_AS, RLIMIT_DATA, and RLIMIT_MEMLOCK to reasonable values prevents a process from forcing the system to thrash by limiting available memory.

Third, setting RLIMIT_CORE to a reasonable value (or disabling core dumps entirely) has historically been used to prevent denial of service attacks by filling the disk with core dumps. Today core dumps are often disabled to ensure sensitive information such as encryption keys are not inadvertently written to disk where an attacker can later retrieve them. Sensitive information should also be mlock()ed to prevent it from being written to the swap disk.

You can try running the following commands:

ulimit -s 4096
/bin/true

and see this output:

-bash: /bin/true: Argument list too long

Really!? Find more at Red Hat Bug 1463241 – rlimit_stack problems after update.

If you’re using this kernel, I suggest you upgrade immediately. Applications that are written to set tight stack limits will fail.
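
A quick way to check whether you're on the affected kernel and what's available to move to (treat this as a sketch; the exact fixed version depends on your errata stream):

uname -r                                   # affected if it prints 3.10.0-514.26.2.el7.x86_64
yum --showduplicates list kernel | tail    # see which newer kernel packages are available
yum update kernel                          # then reboot into the new kernel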


Automated backup of AWS route53 zones

cli53! It’s the coolest tool you can use for Amazon’s Route 53 DNS service! This is the post I originally tried to follow for backing up my zone files.

https://sysinfo.io/automated-backup-aws-route-53-record-sets/

I suspect that AWS (or cli53) changed the output format at some point, so the command in that post no longer works. Here’s one that does:

cli53 list | awk '{print $2}' | grep -v Name | while read line; do cli53 export ${line} > ~/backup/${line}bk; done

This grabs all of the domains and exports each zone file into ~/backup.
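
To make it actually automated, you can drop the one-liner into a script and schedule it with cron; a sketch, where the script path is an assumption and the backup directory has to exist:

#!/bin/bash
# /usr/local/bin/backup-route53.sh (hypothetical location)
mkdir -p ~/backup
cli53 list | awk '{print $2}' | grep -v Name | while read line; do
    cli53 export ${line} > ~/backup/${line}bk
done

And the cron entry, e.g. nightly at 1am:

0 1 * * * /usr/local/bin/backup-route53.sh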