How to Set Up DNS Blacklisting in a Lab Environment for Testing

This is a very simple setup for those who have a lab environment and do not want to be connected to the public Internet while testing.

Some background:

The way a DNSBL works is that when a connection is made to your mail server, the server takes the client’s IP address, reverses its octets, appends a domain onto it, and does a DNS A or TXT record lookup for that name.

For example, if a spammer’s IP is 10.4.17.108 and you are using spam.list.com as your DNSBL zone, your MTA will do a query for 108.17.4.10.spam.list.com. If the query returns a record, it means that the IP address is listed in the blackhole list and that the mail should be rejected.
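To make the reversal concrete, here’s a tiny shell sketch (using the hypothetical IP and list domain from the example above) that builds the query name the MTA would look up:

IP=10.4.17.108
DNSBL=spam.list.com
# Reverse the octets and append the DNSBL domain:
QUERY=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}').$DNSBL
echo "$QUERY"    # prints 108.17.4.10.spam.list.com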

So the first thing you will need to do is set up a simple DNS server. You can find out how to do that by consulting the DNS and BIND book, http://docs.sun.com/db/doc/816-7511, or various other sources.
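If you go with BIND, the zone also needs to be declared in named.conf. A minimal master-zone stanza (the file path here is just an assumption matching the sample below) looks like:

zone "spam.list.com" {
        type master;
        file "/var/named/spam.list.com";
};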

Then, you need to set up a zone. Here's a sample:
 
# cat /var/named/spam.list.com
 
$TTL 86400
@               1D IN SOA @ root (
                        42      ; serial
                        3H      ; refresh
                        15M     ; retry
                        1W      ; expire
                        1D )    ; minimum
                IN NS   localhost.
                IN A    10.4.16.11
108.17.4.10     IN A    127.0.0.2
108.17.4.10     IN TXT  "10.4.17.108 is listed in spam.list.com"
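After reloading named, you can verify the zone answers the way an MTA would query it. Run this on the DNS server itself (or replace @localhost with its IP):

# dig @localhost 108.17.4.10.spam.list.com A +short
127.0.0.2
# dig @localhost 108.17.4.10.spam.list.com TXT +short
"10.4.17.108 is listed in spam.list.com"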

With this in place, all you need to do is set up your MTA to use spam.list.com for DNSBL lookups.
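How you do that depends on your MTA. As one example, with Postfix you would add something like the following to main.cf (sendmail has an equivalent FEATURE(`dnsbl') macro); treat this as a sketch, not a drop-in config:

# /etc/postfix/main.cf -- reject clients listed in spam.list.com
smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_rbl_client spam.list.com,
        permit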

ESX3 – remote console not coming through – everything else OK

Problem: From the Virtual Infrastructure Client, I log in and can do whatever I want except see a VM’s console. The VM powers up, I can modify the VMs, but when I go to the console, I just get a blank black screen. When I use Open Console, I get a timeout. I set up the .vmx file so that I could use VNC to connect to the console, and that works fine. Through Web Access, I can see the console just fine too. What gives?

In VI3, these connections are handled a little differently. An incoming remote console connection goes to port 902 in the COS, where the vmware-authd service listens. The MKS (mouse, keyboard, screen) connection then happens on port 903, where vmware-vmkauthd listens. Since connections to port 903 are forwarded to COSShadow, the COS never sees those packets. The client actually makes its request on port 902, but the server then redirects the client to connect on port 903. If there’s any type of NAT in between, or some other network tweak, it can cause this redirect to fail.
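A quick way to confirm the redirect is the problem is to test both ports from the client machine (the hostname here is hypothetical; substitute your ESX server):

# Both should connect; if 902 works but 903 times out,
# something in the path (NAT, firewall) is breaking the redirect:
telnet esx01.example.com 902
telnet esx01.example.com 903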

Here’s the workaround:

1) Open up the /etc/vmware/config file and append to the bottom:

vmauthd.server.alwaysProxy = "TRUE"

2) Restart the management agents by running:

/etc/init.d/mgmt-vmware restart

3) Disconnect and reconnect the Virtual Infrastructure Client or VirtualCenter from the ESX server.

This avoids the authd redirection and should allow your remote console to function properly.
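For reference, steps 1 and 2 as a quick COS shell sequence (a sketch; note the straight quotes, and consider backing up the config file first):

echo 'vmauthd.server.alwaysProxy = "TRUE"' >> /etc/vmware/config
/etc/init.d/mgmt-vmware restart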

How to allow ssh into a machine without a password

The first thing you need to do is give your public key to the server that you’ll be logging into.

To generate the keys I run (on the client):

ssh-keygen -t dsa

Here’s the output:

Generating public/private dsa key pair.
Enter file in which to save the key (/home/alton/.ssh/id_dsa):
Created directory '/home/alton/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/alton/.ssh/id_dsa.
Your public key has been saved in /home/alton/.ssh/id_dsa.pub.
The key fingerprint is:
9b:e8:24:89:34:78:ab:fd:85:93:97:df:10:9c:c8:16 alton@streetfighter

Now you’ve got to move your public key to the other machine (the server). I just take a look at the contents of the file:

[alton@streetfighter ~]$ cat /home/alton/.ssh/id_dsa.pub

ssh-dss AAAAB3NzaC1kc3MAAACBASC1tnxqkSqdGswZkp1P0o6Xn93N9XB5VpenK4k/bV2g7ojywh83CAhjhM5gKlCJ2XI+gM+XD12t+M3+Gxlre79ltiH+L86A33trW6PmUR9cQOp9cLHEzC5bjfs3esqWuuFxC4ObpHbJXakNmSQlSsNzE8ZnGG6emHJUVkNyTcjXAAAAFQD1+ZD6LG2PlwPRHLYmVxGWOm3woQAAAIAHR88SB2dnnjE2a9fQAydR2pbMOrh4X7CISuCnGtPp0lFwAIm5WAMS2oQ8Mh9HG8ou9wjhPgGSlCe4jvZEZyqlOADTZLX8hzno8LMHMTBbNxKji3qGOzJZJBbdhwg9GwitWbrHrXfZS7p2+HAIvDKINleapgrXwrInEJ6zWPYDewAAAIAo4Nud2c06/3A9LqIGeB64Nvecb2MOBOaBSeiXHqOSE/c1a9gmTCb5im3qIcrWs+cgUXbCxyHWaYQYTWUWsLh79TaLIe/rXOMr58aXQ34k1rcXaOgw3nb45YxcEIbAcqO85zc+clpfhTVzM8Aqkh5hv1b9/1pBkz75oC5KsYYkrPQ== alton@streetfighter

Then I go to the other machine and copy the contents into ~/.ssh/authorized_keys.

So in that file, I add a new line (if the file already exists) and paste:

ssh-dss AAAAB3NzaC1kc3MAAACBASC1tnxqkSqdGswZkp1P0o6Xn93N9XB5VpenK4k/bV2g7ojywh83CAhjhM5gKlCJ2XI+gM+XD12t+M3+Gxlre79ltiH+L86A33trW6PmUR9cQOp9cLHEzC5bjfs3esqWuuFxC4ObpHbJXakNmSQlSsNzE8ZnGG6emHJUVkNyTcjXAAAAFQD1+ZD6LG2PlwPRHLYmVxGWOm3woQAAAIAHR88SB2dnnjE2a9fQAydR2pbMOrh4X7CISuCnGtPp0lFwAIm5WAMS2oQ8Mh9HG8ou9wjhPgGSlCe4jvZEZyqlOADTZLX8hzno8LMHMTBbNxKji3qGOzJZJBbdhwg9GwitWbrHrXfZS7p2+HAIvDKINleapgrXwrInEJ6zWPYDewAAAIAo4Nud2c06/3A9LqIGeB64Nvecb2MOBOaBSeiXHqOSE/c1a9gmTCb5im3qIcrWs+cgUXbCxyHWaYQYTWUWsLh79TaLIe/rXOMr58aXQ34k1rcXaOgw3nb45YxcEIbAcqO85zc+clpfhTVzM8Aqkh5hv1b9/1pBkz75oC5KsYYkrPQ== alton@streetfighter
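Two things worth adding here: sshd is picky about permissions and may silently ignore the key if they’re loose, and many systems ship ssh-copy-id, which automates the whole copy-and-append step (the server name below is hypothetical):

# On the server:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Or, from the client, let ssh-copy-id do all of the above:
ssh-copy-id -i ~/.ssh/id_dsa.pub alton@server.example.com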

That’s all.

license problem … not enough licenses, but 0 of 6 are used

LMTOOLS seems to check out the licenses according to its logs, but they check back in immediately. Interesting, huh? We tried changing the file different ways – it was so weird. Finally, we had someone in support run the license checker tool they have, and they found that we had mixed up the host-based licenses with the server-based licenses – doh! So we just separated them into new files and it was all set.

It turns out that the website, when generating licenses, can produce host-based licenses instead of server-based ones. You can tell the difference by looking for:

VENDOR_STRING=licenseType=Host

as opposed to:

VENDOR_STRING=licenseType=Server
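A quick sanity check before loading a license file is to count how many entries of each type it contains (the filename is assumed; -o needs GNU grep):

grep -o 'licenseType=[A-Za-z]*' mixed.lic | sort | uniq -c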

VMotion failed … everything looks right … why?

Obviously, if everything’s right, it won’t fail, but this one is more a bug than anything else. If you use dedicated NICs just for VMotion, with nothing but a crossover cable between the two NICs, you can run into trouble when the VMotion IPs are on the same IP network as everything else. It seems to want to look for the router or something. The attempt shows up in the logs on both boxes, so you may want to rule out a networking issue, but in this case you can’t: the logs on both sides will just indicate a timeout. Here, both servers were on a 10.x.x.x/24 network, and the VMotion and iSCSI interfaces had IP addresses that were also 10.x.x.x/24, but the VMotion NICs can’t reach the router.

The fix is easy. Just change the VMotion IPs to a subnet the servers don’t otherwise know about – something like 192.168.0.1 and 192.168.0.2 on a /24.
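On ESX 3 you can re-IP the VMotion vmkernel NIC from the COS as well as from the VI Client. A sketch with esxcfg-vmknic, assuming the portgroup is named "VMotion" and using the example addresses above:

# On the first host: remove the old vmknic, then re-add it on the new subnet
esxcfg-vmknic -d "VMotion"
esxcfg-vmknic -a -i 192.168.0.1 -n 255.255.255.0 "VMotion"
# On the second host, do the same with 192.168.0.2.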