Archive for 2007

Creating a Process Table hash in Perl

Friday, December 28th, 2007

I came across a situation where I needed to access the process table in Perl. The problem I found was that the best accessor, Proc::ProcessTable, only returns an array. Since it seems fairly senseless to keep looping over an array to find the exact process id that you want, you may want to turn it into a hash.

use strict;
use warnings;
use Proc::ProcessTable;

 # Create a new process table object
 my ($pt) = new Proc::ProcessTable;

 # Initialize your process table hash
 my (%pt_hash);

 # Get the fields that your architecture supports
 my (@fields) = $pt->fields;

 # Outer loop for each process id
 foreach my $proc ( @{$pt->table} ) {
    # Inner loop for each field within the process id
    for my $field (@fields) {
       # Add the field to the hash
       $pt_hash{$proc->pid}{$field} = $proc->$field();
    }
 }

It’s just as simple as that. If you want to be sure that it’s all in there, add these two lines at the end of the file for proof:

use Data::Dumper;
print Dumper \%pt_hash;

The hash is organized with the process ids as the top-level keys. Underneath each one is another hash with all the fields as its keys.
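
As a quick illustration of the payoff, here is a direct lookup (cmndline is one of the fields on my Linux system; your architecture’s field list may differ):

# Look up a single process directly -- no looping required
# (cmndline is an assumption; use whatever $pt->fields gave you)
my $pid = $$;  # this script's own process id
print "$pid => $pt_hash{$pid}{cmndline}\n" if exists $pt_hash{$pid};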

Deleting Lots Of Files (SysAdmin problem solving exercise)

Monday, December 17th, 2007

Since I know I am not the first (or the last) to make a typo in logrotate and not catch it for a while…someone else must have been in the position of having to delete a lot of files in the same manner. I recently learned that, as usual, there is more than one way to handle it.

To put the situation in context, I basically allowed thousands of mail.* files to be created. These files littered the /var/log/ directory and basically slowed down access to the entire file system. I figured this out in a number of ways.

The first clue was that when I tried to do an ls anywhere, it would just hang. My first reaction was to check what was eating up the CPU, so I ran top. I noticed that logrotate was hogging all the CPU cycles. Since I know that logrotate basically only operates on one parent directory (by default /var/log), I headed on over there and did an ls. Once again, it just hung. Then I figured the file system was slow and decided to check out some file system information. The next two commands I ran were df -h and df -i. I ran df -h to see if we were out of disk space (and yes, I lazily use human readable format). I ran the second to check how many inodes were in use. (For more information on inodes, check out the Wikipedia entry here.)

Now that I knew the system was short on inodes, I checked out the output of lsof. At that point I knew we had some serious problems in the /var/log dir. After some quick investigation, I realized that there were too many mail.* files. How do I get rid of them? Glad you asked… Let’s assume that we want to delete ALL the mail.* files in the /var/log directory.

1) The easiest way is to do it with find:
1a) Using find‘s delete command:

[root@eric] /var/log # find ./ -type f -name "mail.*" -delete

or
1b) using find‘s exec command with rm:

[root@eric] /var/log # find ./ -type f -name "mail.*" -exec rm -rf '{}' \;

These will work, but both will be relatively slow since they operate on one file at a time rather than in batches.

2) A slightly more preferred way is to use bash:

[root@eric] /var/log # for n in mail.*; do rm -v "$n"; done

This is a little faster, but will still be relatively slow since there is no batch execution. (Note: The -v in the rm will cause quite a bit of output since it is showing you EVERY file it deletes. Feel free to leave this out if you really screwed up.)

3) The actual preferred method is to use find:

[root@eric] /var/log # find ./ -type f -name "mail.*" | xargs rm -f

I believe this is the preferred method because xargs batches the file names up, passing as many as possible to each invocation of rm. That means far fewer processes and much less work than removing the files one at a time.
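
One caveat worth mentioning: if any of the file names contain whitespace, the plain pipe to xargs will mangle them. A safer variant (assuming GNU find and xargs, which support null-delimited output) looks like this:

[root@eric] /var/log # find ./ -type f -name "mail.*" -print0 | xargs -0 rm -f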

There are certainly other ways to accomplish this task. It can always be done with a Perl one-liner or even using some Perl modules to save some time. These are just a few ideas to point someone in the right direction.

MySQL Proxy Query Profiling

Friday, December 7th, 2007

Since I am now finally getting to play with MySQL Proxy, I am going to outline some recipes here that I have found/created/modified that may be useful to someone other than me. This is a recipe for profiling queries. It writes the information to the file named by PROXY_LOG_FILE, currently mysql.log, which will be created in the directory that you run mysql-proxy from. The file itself is mostly commented and should therefore be pretty self-explanatory. It was adapted from the reference documentation listed at the bottom of this entry.

assert(proxy.PROXY_VERSION >= 0x00600,
  "You need to be running mysql-proxy 0.6.0 or greater")

 -- Set up the log file
 local log_file = os.getenv("PROXY_LOG_FILE")
 if (log_file == nil) then
    log_file = "mysql.log"
 end

 -- Open up our log file in append mode
 local fh = io.open(log_file, "a+")

 -- Set some variables
 local original_query = ""
 local executed_query = ""
 local replace = false
 local comment = ""

 -- For profiling
 local profile = 0

-- Push the query onto the Queue
function read_query( packet )
  if string.byte(packet) == proxy.COM_QUERY then
    local query = string.sub(packet, 2)

    -- Pull out the comment and store it
    original_query = string.gsub(query, "%s*%*%*%*%s(.+)%s%*%*%*%s*",'')
    comment = string.match(query, "%s*%*%*%*%s(.+)%s%*%*%*%s*")

    -- Add the original packet to the query if we have a comment
    if (comment) then
        if string.match(string.upper(comment), '%s*PROFILE') then
          -- Profile types:
          --   ALL, BLOCK IO, CONTEXT SWITCHES, IPC,
          --        MEMORY, PAGE FAULTS, SOURCE, SWAPS

          -- Set it up for profiling by prepending
          --   SET PROFILING=1 to the original query
          original_query = "SET PROFILING=1; " .. original_query
      end -- End if (PROFILE)
    end -- End if (comment)

    executed_query = original_query
    -- Check for a 'SELECT' typo
    if string.match(string.upper(original_query), '^%s*SLECT') then
        executed_query = string.gsub(original_query,'^%s*%w+', 'SELECT')
        replace = true;
    -- matches "CD" as first word of the query
    elseif string.match(string.upper(original_query), '^%s*CD') then
        executed_query = string.gsub(original_query,'^%s*%w+', 'USE')
        replace = true
    end

    -- Postpend the other profiling strings to the query
    if (comment and string.match(string.upper(comment), '%s*PROFILE')) then
        executed_query = executed_query .. "; SHOW " .. comment
    end

    -- Actually execute the query here
    proxy.queries:append(1, string.char(proxy.COM_QUERY) .. executed_query )
    return proxy.PROXY_SEND_QUERY
  else
    executed_query = ""
  end
end

function read_query_result (inj)
  local res = assert(inj.resultset)

  -- Count the rows retrieved by the actual query
  local row_count = 0
  local num_cols = string.byte(res.raw, 1)
  if num_cols > 0 and num_cols < 255 then
     for row in inj.resultset.rows do
         row_count = row_count + 1
     end
  end

  -- Prepend the error tag in the log
  local error_status = ""
  if res.query_status and (res.query_status < 0) then
     error_status = "[ERROR]"
  end

  -- affected_rows (for INSERT/UPDATE/DELETE) overrides the count
  if (res.affected_rows) then
     row_count = res.affected_rows
  end
  
  -- Prepend the comment line in the log
  if (comment) then
     fh:write( string.format("%s %6d -- [COMMENT] %s\n",
        os.date('%Y-%m-%d %H:%M:%S'), 
        proxy.connection.server["thread_id"],
        comment))
  end
  
  -- Prepend the typo in the log
  if (replace) then
     fh:write( string.format("%s %6d -- [REPLACEMENT] %s\n\t\t\t%s\n",
        os.date('%Y-%m-%d %H:%M:%S'), 
        proxy.connection.server["thread_id"],
        ("replaced " .. original_query),
        ("with " .. executed_query)))
  end
  
  -- Write the query adding the number of rows retrieved and query time
  fh:write( string.format("%s %6d -- %s %s {%d rows} {%d ms}\n",
     os.date('%Y-%m-%d %H:%M:%S'), 
     proxy.connection.server["thread_id"],
     error_status,
     executed_query,
     row_count,
     inj.query_time)
  )
  fh:flush()
end

To make this work, simply prefix your query with 3 asterisks, 'PROFILE <profile_type>', and then 3 more asterisks, and you will have the profile information returned to you with your query:

*** PROFILE ALL *** SELECT * FROM foo_bar;

Two result sets will be returned: your results and then the profile of your results.
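
To try the script out, save it (profile.lua is an arbitrary name here) and point mysql-proxy at it. With 0.6.0, something like the following should work, with PROXY_LOG_FILE optionally overriding the log location:

$ PROXY_LOG_FILE=/tmp/mysql.log mysql-proxy \
    --proxy-lua-script=profile.lua \
    --proxy-backend-addresses=127.0.0.1:3306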

Reference: Writing LUA Scripts for MySQL Proxy

Text Messages to Cell Phones via Email

Monday, December 3rd, 2007

I have been compiling a list of the domains that one needs in order to send text messages to cell phones via email. As a huge user of Nagios, this is how I keep myself aware of the status changes. Below I have listed the carriers that I use most frequently. If you have any others to list here to make this more complete, please add a comment and I will add it to the list.

I have seen other instances of this before, but some are outdated. These are the newest ones that I have come across.

The assumption here is that the telephone number of the person that you are trying to text message is 2225551212. Just make sure that there is nothing in between the numbers (like a ‘.’ or a ‘-‘). Also make sure that you don’t put a ‘1’ before the phone number.

  • AT&T: 2225551212@txt.att.net
  • Verizon: 2225551212@vtext.com
  • T-Mobile: 2225551212@tmomail.net
  • Alltel: 2225551212@message.alltel.com
  • Virgin Mobile: 2225551212@vmobl.com
  • Sprint: 2225551212@messaging.sprintpcs.com
  • Nextel: 2225551212@messaging.nextel.com
  • All Others: 2225551212@teleflip.com

It should be noted that the last item (Teleflip) can be used either in place of any of these or as a fall-through. It seems to act as a universal text message system.
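
Since I mentioned Nagios, here is roughly what using one of these addresses looks like from a shell. This sketch assumes a working local MTA and a mailx-style mail command:

# Send a quick alert to a Verizon phone
echo "web1 is DOWN" | mail -s "nagios alert" 2225551212@vtext.com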

UPDATES:
Here are the contributed addresses. The thanks are in parentheses following the numbers:

  • Claro (Brazil): 2225551212@clarotorpedo.com.br (Rodrigo)

Cloning a Virtual Machine in VMWare VI3 without Virtual Server

Monday, November 5th, 2007

I, like many other people working in a small company, have to fix problems and come up with solutions with cost at the forefront. I had to make many virtual machines appear from nowhere to create an environment in virtually no time at all. Since all I had was VMWare Server (for Linux), I started there. When I realized that those didn’t translate to ESX, I had to come up with another solution. I created a single template guest OS (of Gentoo 2006.1 which is our primary server OS here) and decided to clone that. How did I do it…well, I am glad you asked.

The key here was to figure out what the VI3 (Virtual Infrastructure 3) client did and mimic it. In order to figure this out, I copied the entire /etc directory to a place where I could later diff it. I created 3 VMs (virtual machines) with nothing on them to discern the patterns that the client made in its files. I then diff’d the 2 versions of the /etc directory, and then I knew the main changes that had to be made. It should also be noted that the Template VM should be powered off before creating the Clone VM.

I also kept a pristine copy of the template VM so I would always have something to copy from when creating a new VM. For the sake of argument, let’s go with the following names and terminology so we can all stay on the same page. The template VM is going to be named Template. The cloned VM is going to be named Clone. I am going to assume that the template VM that you are using is already fully created, configured, and installed. I am also assuming that you either have console or SSH access to the host since you will need to have access to the commands on the computer itself.

The first step is to copy the template directory. My volume is named Array1, so the command looks like this (Note: I add the & to put the command in the background since it takes a while):

[root@vm1 ~]# cp -arp /vmfs/volumes/Array1/Template /vmfs/volumes/Array1/Clone &

Now it’s time to get started on the file editing. The first group of files we have to mess with are in /etc/vmware/hostd/.

vmInventory.xml:
Assuming the only virtual machines you have are going to be Template and his buddy Clone, the following is what your vmInventory.xml should look like:

<ConfigRoot>
  <ConfigEntry id="0001">
    <objID>32</objID>
    <vmxCfgPath>/vmfs/volumes/4725ae82-4e276b80-4c76-001c23c38d80/Template/Template.vmx</vmxCfgPath>
  </ConfigEntry>
  <ConfigEntry id="0002">
    <objID>48</objID>
    <vmxCfgPath>/vmfs/volumes/4725ae82-4e276b80-4c76-001c23c38d80/Clone/Clone.vmx</vmxCfgPath>
  </ConfigEntry>
</ConfigRoot>

The 3 items that you have to note here are:

  1. id: This is a 4 digit zero-padded number going up in increments of 1
  2. objID: This is a number going up in increments of 16
  3. vmxCfgPath: Here you need to ensure that you have the proper hard path (not sym-linked)

pools.xml:
Using the same assumption as before, the only 2 VMs are Template and Clone

<ConfigRoot>
  <resourcePool id="0000">
    <name>Resources</name>
    <objID>ha-root-pool</objID>
    <path>host/user</path>
  </resourcePool>
  <vm id="0001">
    <lastModified>2007-10-30T16:23:57.618151Z</lastModified>
    <objID>32</objID>
    <resourcePool>ha-root-pool</resourcePool>
    <shares>
      <cpu>normal</cpu>
      <mem>normal</mem>
    </shares>
  </vm>
  <vm id="0002">
    <lastModified>2007-10-30T16:23:57.618151Z</lastModified>
    <objID>48</objID>
    <resourcePool>ha-root-pool</resourcePool>
    <shares>
      <cpu>normal</cpu>
      <mem>normal</mem>
    </shares>
  </vm>
</ConfigRoot>

The 3 items that you have to note here are:

  1. id: This is a 4 digit zero-padded number going up in increments of 1 (and it must match the id from vmInventory.xml)
  2. objID: This is a number going up in increments of 16 (and it must match the objID from vmInventory.xml)
  3. The lastModified item here doesn’t matter as it will be changed when you make a change to the VM anyway.

By now, the Template directory should be finished copying over to the directory that we will be using as our clone. The first thing we have to do is rename all the files in the directory to mimic the name of our new VM.

  # mv Template-flat.vmdk Clone-flat.vmdk
  # mv Template.nvram Clone.nvram
  # mv Template.vmdk Clone.vmdk
  # mv Template.vmx Clone.vmx
  # mv Template.vmxf Clone.vmxf

Now we just need to edit some files and we are ready to go. First let’s edit the newly renamed Clone.vmdk file. You need to change the line that reads something similar to (the difference will be in the size of your extents):

# Extent description
RW 20971520 VMFS "Template-flat.vmdk"

to look like:

# Extent description
RW 20971520 VMFS "Clone-flat.vmdk"

Save and exit this file. The next file is Clone.vmx. The key here is to change every instance of the word Template to Clone. There should be 4 instances:

  1. nvram
  2. displayName
  3. extendedConfigFile
  4. scsi0:0.fileName

Don’t forget to change the MAC address(es). Their variable name(s) should be something like ethernet0.generatedAddress. Delete the line that has the variable name sched.swap.derivedName. It will be regenerated and added to the config file. Lastly, add the following line to the end of the file if it doesn’t already exist elsewhere in the file:

uuid.action = "create"
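
Rather than hunting down the 4 instances by hand, a quick sed over the copied file should catch them all (assuming your service console’s sed supports -i). Review the result, and still handle the MAC address and sched.swap.derivedName lines manually:

[root@vm1 ~]# sed -i 's/Template/Clone/g' /vmfs/volumes/Array1/Clone/Clone.vmx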

The final item that needs to be done is the one that got me for such a long time. This is the step that will allow your changes to be seen in the client. (Drum roll …..)

Restart the VMWare management console daemon. So simple. Just run the following command:

  # /etc/init.d/mgmt-vmware restart

Note: This will log you out of the client console. But when you log back in, you will have access to the changes that you have made including the clones.

Good luck, and be careful as XML has a tendency to be easier to break than to fix.

Asterisk Caller ID Blocking Recipe

Tuesday, September 25th, 2007

Here’s another quick little Asterisk recipe that I threw together. It’s handy because it only takes about 10 minutes to set up and is infinitely useful to the sales types. Just a note, this was done with Asterisk 1.4.8.

I wanted to do a little in AEL just to get a feel for it. It is a little AEL and a little regular extensions.conf type stuff.

The basic way that this CallerID Blocking recipe works is to use the Asterisk DB. An entry with the family CIDBlock and the key of the dialing user’s extension will have a value of 1 or 0. The user initially sets their preference to either enabled (1) or disabled (0). When a number gets dialed, the preference gets checked and the CALLERID(name)/CALLERID(number) values are set accordingly. In order for the user to enable CID Blocking, they dial *81. It will stay enabled until they dial *82.

How do we accomplish this? Easy. The sounds come with the asterisk sounds package.

Open up your extensions.conf and add the following lines (to whichever context works for you):

; Enable CallerID Blocking for the dialing extension
exten => *81,1,Set(DB(CIDBlock/${CHANNEL:4:4})=1)
exten => *81,2,Playback(privacy-your-callerid-is)
exten => *81,3,Playback(enabled)
exten => *81,4,Hangup()

; Disable CallerID Blocking for the dialing extension
exten => *82,1,Set(DB(CIDBlock/${CHANNEL:4:4})=0)
exten => *82,2,Playback(privacy-your-callerid-is)
exten => *82,3,Playback(disabled)
exten => *82,4,Hangup()

The last modification that needs to happen is that you have to change the exten that dials out to check the DB and react accordingly. Here is a snippet of mine (with the numbers changed to protect the innocent):

; Outbound calling for 111.222.3456 (my phone number)
exten =>_1NXXNXXXXXX,1,Set(CIDBlock=${DB(CIDBlock/${CHANNEL:4:4})})
exten =>_1NXXNXXXXXX,2,Set(${IF($[${CIDBlock} = 1]?CALLERID(name)=Unavailable:CALLERID(name)=MyCompany)})
exten =>_1NXXNXXXXXX,3,Set(${IF($[${CIDBlock} = 1]?CALLERID(number)=0-1-999:CALLERID(number)=1112223456)})
exten =>_1NXXNXXXXXX,4,DIAL(SIP/provider/${EXTEN},60,tr)
exten =>_1NXXNXXXXXX,5,Hangup()

That’s really all it takes to set it up. Quick and handy.
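
While testing, you can poke at the preference values directly from the Asterisk CLI (extension 1000 here is just an example):

*CLI> database put CIDBlock 1000 1
*CLI> database get CIDBlock 1000
*CLI> database show CIDBlock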

* Note: *81 & *82 are arbitrary number combinations. Adjust to what works for you.

If you’re feeling really frisky, I added this AEL extension to check the status of your CallerID Blocking on *83. For fun, I have also included my *67 script for those who need an idea of how it’s done. As with almost anything in Asterisk, there are many ways to do it; this is just how I chose to accomplish this.

// Extras for sending things outbound
context outbound-extra {
   *83 => {
            Playback(privacy-your-callerid-is);
            Set(CIDBlock=${DB(CIDBlock/${CHANNEL:4:4})});
            if(${CIDBlock} == 1) {
                Playback(enabled);
            }
            else {
                Playback(disabled);
            }   
            Hangup();
   };

   _*67. => {
      // Remove the *67 from the number we are dialing
      Set(dialed_number=${EXTEN:3:11});
      Set(CALLERID(name)=Unavailable);
      Set(CALLERID(number)=0-1-999);
      DIAL(SIP/provider/${dialed_number},60,tr);
      Hangup();
   };
};

Asterisk *69 with 1.4.x

Monday, July 2nd, 2007

Many phone users just take for granted the service provided by the Telco called *69 (pronounced “Star six nine”). Since Asterisk is a telephony platform, it doesn’t just come with *69 built in. So if you want it, you have to implement it. To save you some time, I have implemented it with a few tweaks. This setup works, but YMMV (your mileage may vary).

The concept of the operation here is as follows: When a call comes in, we grab the caller id number and store it in the Asterisk DB. When a user makes an outgoing call to *69, we then get that last phone number that called in from the AstDB and dial it using our standard Dial() function. I will get deeper into each phase as I go through the process.

Just to make this all a little clearer, I will say that the context for making outgoing calls is outbound and the context for internal calls is internal.

1. The first thing to do is to modify the actions a call takes when it comes to the phone. Assuming the first line of your dialplan for the phone was:

exten => _1[0]XX,1,Dial(SIP/${EXTEN},21)

This would take the call and send it to whichever SIP peer matched (be it 1000, 1001, etc). To ensure that only non-internal calls get saved to our AstDB, I have added a statement to skip calls coming from the internal context. This is our new step 1. If the call comes from the internal context, it goes to step 3.

exten => _1[0]XX,1,GotoIf($["${CONTEXT}" = "internal"]?3)

If not, we continue on to our step 2. Here we are going to make use of the DB(family/key) function. (Note: For those who had trouble with this function (like me), the family is like a table and the key is like a column name). I set the family name to LastCIDNum and the key to the receiving extension. The value gets set to the caller ID number. This was done as follows:

exten => _1[0]XX,2,Set(DB(LastCIDNum/${EXTEN})=${CALLERID(number)})

I then moved the original Dial to step 3. Our final internal product looks something like this:

exten => _1[0]XX,1,GotoIf($["${CONTEXT}" = "internal"]?3)
exten => _1[0]XX,2,Set(DB(LastCIDNum/${EXTEN})=${CALLERID(number)})
exten => _1[0]XX,3,Dial(SIP/${EXTEN},21)
exten => _1[0]XX,4,Voicemail(${EXTEN}@voicemail,u)
exten => _1[0]XX,5,Hangup()
exten => _1[0]XX,102,Voicemail(${EXTEN}@voicemail,b)
exten => _1[0]XX,103,Hangup()

2. The next step is to handle the outbound context for when a *69 call is placed. Assuming you don’t have an outbound dialing macro, we will handle this similarly to the way an outbound SIP call would be placed. First we set the outbound callerid information:

exten => *69,1,Set(CALLERID(number)=12345678901)
exten => *69,2,Set(CALLERID(name)=MyCompany)

Then we grab the last caller id information obtained. It would probably be a good idea to check whether it’s actually there (and fall back to anonymous or something along those lines if it isn’t), but that is something that would be relatively easy to implement after the basics are up and running. To obtain the caller id information from the AstDB, I use the ${CHANNEL} variable to get the caller’s extension for the query. I use the substring variable syntax to pull the 4 digit extension out of the ${CHANNEL} variable (e.g., for a channel named SIP/1000-08a3b2c1, ${CHANNEL:4:4} yields 1000). I then stick it in a temporary variable called lastcall.

exten => *69,3,Set(lastcall=${DB(LastCIDNum/${CHANNEL:4:4})})

Once we have that information, we can dial out almost as normal. The one issue is that for US calls, the stored callerid(num) doesn’t include the leading 1. So in the Dial function, I add a 1 for domestic calls.

exten => *69,4,DIAL(SIP/yourprovider/1${lastcall},60,tr)

(Note: The record in the CDR gets added as the outbound dialed number, not *69.)
Our final product for the outbound context should look something like this:

exten => *69,1,Set(CALLERID(number)=12345678901)
exten => *69,2,Set(CALLERID(name)=MyCompany)
exten => *69,3,Set(lastcall=${DB(LastCIDNum/${CHANNEL:4:4})})
exten => *69,4,DIAL(SIP/yourprovider/1${lastcall},60,tr)
exten => *69,5,GotoIf($["${DIALSTATUS}" = "CHANUNAVAIL"]?7)
exten => *69,6,GotoIf($["${DIALSTATUS}" = "CONGESTION"]?7)
exten => *69,7,Hangup()
exten => *69,101,Congestion()

In order to see if your DB() calls are working properly, you can run the command database show LastCIDNum from the Asterisk console. It will list all the keys entered in the family “LastCIDNum”. If these are all (or mostly) external phone numbers, then you have likely done this setup correctly.
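
For reference, the output should look roughly like this (numbers invented for illustration):

*CLI> database show LastCIDNum
/LastCIDNum/1000    : 2125551212
/LastCIDNum/1001    : 7185551212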

Configuring a Cisco 7961 for SIP and Asterisk

Tuesday, May 29th, 2007

Just prior to writing this, I think I was about ready to kill someone. Setting up this phone was probably one of the most challenging things I have done in a long time. So this will be my attempt to explain to others what I did, and it will hopefully save some people some time.

Since we all need to be on the same page, let’s start out with the conventions:

  • Asterisk: Gentoo Linux, 192.168.1.5
  • Workhorse: Gentoo Linux (DHCP, TFTP, NTP), 192.168.1.20
  • Phone: Cisco 7961
  • Anything starting with a $ means you put your value in it. I will name the variable something descriptive for you
  • Remember that all filenames with Cisco are case sensitive
  • If there are some files you need examples of or access to and aren’t listed, please don’t hesitate to contact me.

I am not going to go into a lot of detail with things, just give some overview and some examples and it will hopefully be enough to get you in the right direction. Check throughout the document for some references and read up on those if need be.

DISCLAIMER: I am not an expert. If you break your phone while doing anything I mention here, I am not responsible. This is just what I did to get everything to work.

1. The first order of business was to add the phone’s MAC address to DHCP so I could be sure what was accessing the tftp server. I also needed to know the MAC address to create the proper files in the tftp directory. Ensure that you set the tftp server, ntp server, and SIP server in DHCP.

group voip {
        option domain-name-servers 192.168.1.20, 1.2.3.4;
        option domain-name "inside.mycompany.com";
        option smtp-server 192.168.1.20;
        option ntp-servers 192.168.1.20;
        option time-servers 192.168.1.20;
        option routers 192.168.1.1;
        option sip-server 192.168.1.5;
        default-lease-time 86400; # 1 day
        max-lease-time 86400;
        server-name "192.168.1.20";
        option tftp-server-name "192.168.1.20";

        host myphone {
            hardware ethernet 00:19:E8:F4:B4:D0;
            fixed-address 192.168.1.200;
        }
}

2. When you first plug in the phone, it’s loaded with the Skinny protocol software only (SCCP), nothing for SIP. This is because the phone was designed to work best (and really only) with the Cisco Call Manager. The first thing I had to do was to obtain the files that go in the tftproot on 192.168.1.20. In the upgrade package were the files:

  • apps41.1-1-3-15.sbn
  • cnu41.3-1-3-15.sbn
  • copstart.sh
  • cvm41sip.8-0-3-16.sbn
  • dsp41.1-1-3-15.sbn
  • jar41sip.8-0-3-16.sbn
  • load115.txt
  • load30018.txt
  • load308.txt
  • load309.txt
  • SIP41.8-0-4SR1S.loads
  • term41.default.loads
  • term61.default.loads

3. Once you place these files in the tftp root directory, you are ready for the upgrade. (Note: You need a Cisco SmartNet contract (or be good with Google) to get these files.) Upgrading requires a factory reset of the phone so it will look for the term61.default.loads file. To perform a factory reset of the phone, hold down the ‘#‘ as the phone powers up. Then dial ‘123456789*0#‘ and let it work. The next time it reboots, it should grab the necessary files from the tftp server and upgrade itself. You can watch the tftp logs and the phone’s LCD to ensure that everything that is supposed to be happening is happening.
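
A quick sanity check from any Linux box (assuming the standard tftp-hpa client) will confirm that the files are actually being served before you blame the phone:

$ tftp 192.168.1.20 -c get term61.default.loads
$ ls -l term61.default.loads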

4. At this point, the phone should be able to completely boot up and will likely just show you the word Unprovisioned at the bottom of the screen. The next step is to create the files that each phone needs to survive. The first file we are going to create is the SEP$MAC.cnf.xml. In the case of the phone that I am going to use for this demo, the filename is: SEP0019E8F490AD.cnf.xml. I know that the phone is also requesting the file CTLSEP0019E8F490AD.tlv, but you can safely ignore that. The minimalist version of the SEP$MAC.cnf.xml file:

<device>
   <deviceProtocol>SIP</deviceProtocol>
   <sshUserId>cisco</sshUserId>
   <sshPassword>cisco</sshPassword>
   <devicePool>
      <dateTimeSetting>
         <dateTemplate>M/D/Ya</dateTemplate>
         <timeZone>Eastern Standard/Daylight Time</timeZone>
         <ntps>
              <ntp>
                  <name>192.168.1.20</name>
                  <ntpMode>Unicast</ntpMode>
              </ntp>
         </ntps>
      </dateTimeSetting>
      <callManagerGroup>
         <members>
            <member priority="0">
               <callManager>
                  <ports>
                     <ethernetPhonePort>2000</ethernetPhonePort>
                     <sipPort>5060</sipPort>
                     <securedSipPort>5061</securedSipPort>
                  </ports>
                  <processNodeName>192.168.1.5</processNodeName>
               </callManager>
            </member>
         </members>
      </callManagerGroup>
   </devicePool>
   <sipProfile>
      <sipProxies>
         <backupProxy></backupProxy>
         <backupProxyPort></backupProxyPort>
         <emergencyProxy></emergencyProxy>
         <emergencyProxyPort></emergencyProxyPort>
         <outboundProxy></outboundProxy>
         <outboundProxyPort></outboundProxyPort>
         <registerWithProxy>true</registerWithProxy>
      </sipProxies>
      <sipCallFeatures>
         <cnfJoinEnabled>true</cnfJoinEnabled>
         <callForwardURI>x-cisco-serviceuri-cfwdall</callForwardURI>
         <callPickupURI>x-cisco-serviceuri-pickup</callPickupURI>
         <callPickupListURI>x-cisco-serviceuri-opickup</callPickupListURI>
         <callPickupGroupURI>x-cisco-serviceuri-gpickup</callPickupGroupURI>
         <meetMeServiceURI>x-cisco-serviceuri-meetme</meetMeServiceURI>
         <abbreviatedDialURI>x-cisco-serviceuri-abbrdial</abbreviatedDialURI>
         <rfc2543Hold>false</rfc2543Hold>
         <callHoldRingback>2</callHoldRingback>
         <localCfwdEnable>true</localCfwdEnable>
         <semiAttendedTransfer>true</semiAttendedTransfer>
         <anonymousCallBlock>2</anonymousCallBlock>
         <callerIdBlocking>2</callerIdBlocking>
         <dndControl>1</dndControl>
         <remoteCcEnable>true</remoteCcEnable>
      </sipCallFeatures>
      <sipStack>
         <sipInviteRetx>6</sipInviteRetx>
         <sipRetx>10</sipRetx>
         <timerInviteExpires>180</timerInviteExpires>
         <timerRegisterExpires>3600</timerRegisterExpires>
         <timerRegisterDelta>5</timerRegisterDelta>
         <timerKeepAliveExpires>120</timerKeepAliveExpires>
         <timerSubscribeExpires>120</timerSubscribeExpires>
         <timerSubscribeDelta>5</timerSubscribeDelta>
         <timerT1>500</timerT1>
         <timerT2>4000</timerT2>
         <maxRedirects>70</maxRedirects>
         <remotePartyID>true</remotePartyID>
         <userInfo>None</userInfo>
      </sipStack>
      <autoAnswerTimer>1</autoAnswerTimer>
      <autoAnswerAltBehavior>false</autoAnswerAltBehavior>
      <autoAnswerOverride>true</autoAnswerOverride>
      <transferOnhookEnabled>false</transferOnhookEnabled>
      <enableVad>false</enableVad>
      <preferredCodec>g711ulaw</preferredCodec>
      <dtmfAvtPayload>101</dtmfAvtPayload>
      <dtmfDbLevel>3</dtmfDbLevel>
      <dtmfOutofBand>avt</dtmfOutofBand>
      <alwaysUsePrimeLine>false</alwaysUsePrimeLine>
      <alwaysUsePrimeLineVoiceMail>false</alwaysUsePrimeLineVoiceMail>
      <kpml>3</kpml>
      <natEnabled>false</natEnabled>
      <natAddress></natAddress>
      <phoneLabel>LinkExperts</phoneLabel>
      <stutterMsgWaiting>1</stutterMsgWaiting>
      <callStats>true</callStats>
      <silentPeriodBetweenCallWaitingBursts>10</silentPeriodBetweenCallWaitingBursts>
      <disableLocalSpeedDialConfig>false</disableLocalSpeedDialConfig>
      <startMediaPort>16384</startMediaPort>
      <stopMediaPort>32766</stopMediaPort>
      <sipLines>
         <line button="1">
            <featureID>9</featureID>
            <featureLabel>100</featureLabel>
            <proxy>192.168.0.205</proxy>
            <port>5060</port>
            <name>100</name>
            <displayName>Eric Lubow</displayName>
            <autoAnswer>
               <autoAnswerEnabled>2</autoAnswerEnabled>
            </autoAnswer>
            <callWaiting>3</callWaiting>
            <authName>100</authName>
            <authPassword></authPassword>
            <sharedLine>false</sharedLine>
            <messageWaitingLampPolicy>1</messageWaitingLampPolicy>
            <messagesNumber>*97</messagesNumber>
            <ringSettingIdle>4</ringSettingIdle>
            <ringSettingActive>5</ringSettingActive>
            <contact>100</contact>
            <forwardCallInfoDisplay>
               <callerName>true</callerName>
               <callerNumber>true</callerNumber>
               <redirectedNumber>false</redirectedNumber>
               <dialedNumber>true</dialedNumber>
            </forwardCallInfoDisplay>
         </line>
      </sipLines>
      <voipControlPort>5060</voipControlPort>
      <dscpForAudio>184</dscpForAudio>
      <ringSettingBusyStationPolicy>0</ringSettingBusyStationPolicy>
      <dialTemplate>dialplan.xml</dialTemplate>
   </sipProfile>
   <commonProfile>
      <phonePassword></phonePassword>
      <backgroundImageAccess>true</backgroundImageAccess>
      <callLogBlfEnabled>1</callLogBlfEnabled>
   </commonProfile>
   <loadInformation>SIP41.8-0-4SR1S</loadInformation>
   <vendorConfig>
      <disableSpeaker>false</disableSpeaker>
      <disableSpeakerAndHeadset>false</disableSpeakerAndHeadset>
      <pcPort>1</pcPort>
      <settingsAccess>1</settingsAccess>
      <garp>0</garp>
      <voiceVlanAccess>0</voiceVlanAccess>
      <videoCapability>0</videoCapability>
      <autoSelectLineEnable>0</autoSelectLineEnable>
      <webAccess>1</webAccess>
      <spanToPCPort>1</spanToPCPort>
      <loggingDisplay>1</loggingDisplay>
      <loadServer></loadServer>
   </vendorConfig>
   <versionStamp>1143565489-a3cbf294-7526-4c29-8791-c4fce4ce4c37</versionStamp>
   <networkLocale>US</networkLocale>
   <networkLocaleInfo>
      <name>US</name>
      <version>5.0(2)</version>
   </networkLocaleInfo>
   <deviceSecurityMode>1</deviceSecurityMode>
   <authenticationURL></authenticationURL>
   <directoryURL></directoryURL>
   <idleURL></idleURL>
   <informationURL></informationURL>
   <messagesURL></messagesURL>
   <proxyServerURL>proxy:3128</proxyServerURL>
   <servicesURL></servicesURL>
   <dscpForSCCPPhoneConfig>96</dscpForSCCPPhoneConfig>
   <dscpForSCCPPhoneServices>0</dscpForSCCPPhoneServices>
   <dscpForCm2Dvce>96</dscpForCm2Dvce>
   <transportLayerProtocol>4</transportLayerProtocol>
   <capfAuthMode>0</capfAuthMode>
   <capfList>
      <capf>
         <phonePort>3804</phonePort>
      </capf>
   </capfList>
   <certHash></certHash>
   <encrConfig>false</encrConfig>
</device>

5. You will also need to create a dialplan so the phone doesn’t try to dial immediately. Below is a minimalist dialplan.xml (which is the filename we used in the above schema).

<DIALTEMPLATE>
  <TEMPLATE MATCH="." TIMEOUT="5" User="Phone" />
  <TEMPLATE MATCH="2500" TIMEOUT="2" User="Phone" />
  <TEMPLATE MATCH=".97" TIMEOUT="2" User="Phone" />
  <TEMPLATE MATCH="5..." TIMEOUT="2" User="Phone" />
  <TEMPLATE MATCH="1.........." TIMEOUT="2" User="Phone" />
</DIALTEMPLATE>

6. Although I am still not entirely sure that you need them, here are 2 other files that I was told need to be referenced:
SIPDefault.cnf:

# Image Version
image_version: "P0S3-08-6-00"

# Proxy Server
proxy1_address: "192.168.1.5"

# Proxy Server Port (default - 5060)
proxy1_port:"5060"

# Emergency Proxy info
proxy_emergency: "192.168.1.5" # IP address here alternatively
proxy_emergency_port: "5060"

# Backup Proxy info
proxy_backup: "192.168.0.205"
proxy_backup_port: "5060"

# Outbound Proxy info
outbound_proxy: ""
outbound_proxy_port: "5060"

# NAT/Firewall Traversal
nat_enable: "false"
nat_address: "192.168.1.5"
voip_control_port: "5061"
start_media_port: "16384"
end_media_port: "32766"
nat_received_processing: "0"

# Proxy Registration (0-disable (default), 1-enable)
proxy_register: "1"

# Phone Registration Expiration [1-3932100 sec] (Default - 3600)
timer_register_expires: "3600"

# Codec for media stream (g711ulaw (default), g711alaw, g729)
preferred_codec: "none"

# TOS bits in media stream [0-5] (Default - 5)
tos_media: "5"

# Enable VAD (0-disable (default), 1-enable)
enable_vad: "0"

# Allow for the bridge on a 3way call to join remaining parties upon hangup
cnf_join_enable: "1" ; 0-Disabled, 1-Enabled (default)

# Allow Transfer to be completed while target phone is still ringing
semi_attended_transfer: "0" ; 0-Disabled, 1-Enabled (default)

# Telnet Level (enable or disable the ability to telnet into this phone
telnet_level: "2" ; 0-Disabled (default), 1-Enabled, 2-Privileged

# Inband DTMF Settings (0-disable, 1-enable (default))
dtmf_inband: "1"

# Out of band DTMF Settings (none-disable, avt-avt enable (default), avt_always - always avt)
dtmf_outofband: "avt"

# DTMF dB Level Settings (1-6dB down, 2-3db down, 3-nominal (default), 4-3db up, 5-6dB up)
dtmf_db_level: "3"

# SIP Timers
timer_t1: "500" ; Default 500 msec
timer_t2: "4000" ; Default 4 sec
sip_retx: "10" ; Default 11
sip_invite_retx: "6" ; Default 7
timer_invite_expires: "180" ; Default 180 sec

# Setting for Message speeddial to UOne box
messages_uri: "*97"

# TFTP Phone Specific Configuration File Directory
tftp_cfg_dir: "./"
# Time Server
sntp_mode: "unicast"
sntp_server: "ntp2.usno.navy.mil" # IP address here alternatively
time_zone: "EST"
dst_offset: "1"
dst_start_month: "April"
dst_start_day: ""
dst_start_day_of_week: "Sun"
dst_start_week_of_month: "1"
dst_start_time: "02"
dst_stop_month: "Oct"
dst_stop_day: ""
dst_stop_day_of_week: "Sunday"
dst_stop_week_of_month: "8"
dst_stop_time: "2"
dst_auto_adjust: "1"

# Do Not Disturb Control (0-off, 1-on, 2-off with no user control, 3-on with no user control)
dnd_control: "0" ; Default 0 (Do Not Disturb feature is off)

# Caller ID Blocking (0-disabled, 1-enabled, 2-disabled no user control, 3-enabled no user control)
callerid_blocking: "0" ; Default 0 (Disable sending all calls as anonymous)

# Anonymous Call Blocking (0-disabled, 1-enabled, 2-disabled no user control, 3-enabled no user control)
anonymous_call_block: "0" ; Default 0 (Disable blocking of anonymous calls)

# Call Waiting (0-disabled, 1-enabled, 2-disabled with no user control, 3-enabled with no user control)
call_waiting: "1" ; Default 1 (Call Waiting enabled)

# DTMF AVT Payload (Dynamic payload range for AVT tones - 96-127)
dtmf_avt_payload: "101" ; Default 100

# XML file that specifies the dialplan desired
dial_template: "dialplan"

# Network Media Type (auto, full100, full10, half100, half10)
network_media_type: "auto"

#Autocompletion During Dial (0-off, 1-on [default])
autocomplete: "0"

#Time Format (0-12hr, 1-24hr [default])
time_format_24hr: "1"

# URL for external Phone Services
services_url: "" # IP address here alternatively

# URL for external Directory location
directory_url: "" # IP address here alternatively

# URL for branding logo
logo_url: "" # IP address here alternatively

# Remote Party ID
remote_party_id: 1 ; 0-Disabled (default), 1-Enabled

and
XmlDefault.cnf.xml

<Default>
<callManagerGroup>
    <members>
       <member priority="0">
          <callManager>
             <ports>
                <ethernetPhonePort>2000</ethernetPhonePort>
                <mgcpPorts>
                   <listen>2427</listen>
                   <keepAlive>2428</keepAlive>
                </mgcpPorts>
             </ports>
             <processNodeName></processNodeName>
          </callManager>
       </member>
    </members>
 </callManagerGroup>
<loadInformation30018 model="IP Phone 7961">P0S3-08-6-00</loadInformation30018>
<loadInformation308 model="IP Phone 7961G-GE">P0S3-08-6-00</loadInformation308>
<authenticationURL></authenticationURL>
<directoryURL></directoryURL>
<idleURL></idleURL>
<informationURL></informationURL>
<messagesURL></messagesURL>
<servicesURL></servicesURL>
</Default>

This should be all the examples and information that you need to get going with your Cisco 7961(|G|GE) phone. Simplicity at its finest, eh Cisco?

UPDATE (3/11/12): Thanks to Ken Alker for letting me know that natEnabled now only accepts true/false and no longer 1/0. I’ve updated it on the page.

Asterisk Echo Cancellation

Thursday, May 17th, 2007

I was lucky (or unlucky) enough to have to rebuild my company’s Asterisk server in preparation for having a backup. I took a slightly less powerful machine, installed Debian Etch on it, and threw Asterisk 1.2.13 on it. The goal was to mimic the Asterisk configuration on its sister machine, which was a Gentoo 1.2.10 install (eventually to be upgraded to 1.4.4). The FXO cards in both machines are exactly the same: TDM400Ps. Everything went smoothly except that when I got the computer in place, the echo that came in was unbearable.

This brings me to the topic title. There is a lot of information that one needs to know to be good at this. And like usual, I didn’t have time to get to it all. Therefore I will show a summary of commands and tasks and provide you with a few links that helped me out. Long story short, read the docs so you don’t completely mess everything up.

It should also be noted that the config files that some of these options reside in may differ slightly depending on your configuration.

One of the first things that should be done is to run fxotune. Ensure Asterisk isn’t running when you run this, and beware: it took approximately 20 minutes to run on my P4 2.8GHz w/ 256M RAM. Run it using the following command:

fxotune -i 4

The eventual result, shown below, ended up in my /etc/fxotune.conf. Just be sure that you run

fxotune -s

before starting Asterisk so your settings get used.

1=8,0,0,0,0,0,0,0,0
2=6,0,0,0,0,0,0,0,0
3=6,0,0,0,0,0,0,0,0
4=6,0,0,0,0,0,0,0,0

That did a good job, but it just wasn’t where it needed to be yet. The next thing I did should have been the first thing I did. Ensure all the telephone wires are as short as possible (while still being long enough to serve their purpose) and are away from all sources of power. This helped with the slight hum I would hear on some calls (this whole scenario is scary to me and thankfully it will be fixed shortly).

Next I moved on to the echo cancellation internals that Asterisk has. First, in phone.conf, I changed the variable echocancel to high. Obviously, you should step through the possible values incrementally, but mine was already set to medium, so high was the next logical value.

Most of the work was done here in the zapata.conf file. The first value that I tinkered with here is the echocancel (same name as in the phone.conf file). It was initially set to yes which means that Asterisk automatically defaults to a value of 128 taps. Knowing that this variable has to be a power of 2, I decided to have some fun. I started at 2 and went straight up through 256. As it turned out, 64 ended up being the best. They say it is impossible to hear with the normal ear, but there was enough of a difference that I was able to discern which sounded better. Sometimes I couldn’t hear a difference at all (which is what I assume the docs were referencing), but as soon as I heard the difference between 64 and 128, I left it at 64. The last variable I toyed with was echotraining. echotraining was off initially. I tried calls with it on and off and there was a significant difference in initial call quality when echotraining was on. If these didn’t work, I would have messed with the value of the jitterbuffers. However, it is sufficient at 10 because of the small amount of memory that this machine has.
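
For reference, the relevant portion of my zapata.conf ended up looking roughly like this (a sketch; the channel definitions and the rest of the file are omitted):

; Echo cancellation settings that worked for my TDM400P
echocancel=64       ; number of taps, must be a power of 2
echotraining=yes    ; train the echo canceller at the start of each call
jitterbuffers=10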

Eventually, once I have more time (and all sysadmins know this is a rarity), I want to move on to adjusting the rxgain/txgain. More information on this can be found here. I didn’t have a need for it now since I have reached what is believed to be tolerable. But ultimately I don’t want to have to deal with this again and I want to finish the job.

Hope this allows at least one person to have a one stop shop for information on echo cancellation and can save them a long night or headache.


Joe Job and SPF

Tuesday, March 27th, 2007

First off, get your mind out of the gutter. A joe job has absolutely nothing to do with what you’re thinking about. It’s email related and it can be a pain in the ass to deal with.

What is a Joe Job?
A Joe Job is the term used to describe the act of forging bulk email so that it appears to the recipient as if the email were coming from the victim. Generally speaking, the term is used to describe an attack of this nature: a spambot or botnet sends a massive amount of email forged to look like it comes from the victim. The name was coined by an attack launched against http://www.joes.com/ in January of 1997. The perpetrator (SPAMMER) sent a flood of emails from spoofed addresses in a (successful) attempt to enrage the recipients into taking action against the company.

Why do I care?
There are many reasons, but I will just cover a few until you get the picture. The main victim of a SPAM attack of this nature ends up having an INBOX full of junk. This junk can potentially include malware, viruses, and any number of phishing or scam based attacks. Also, since there is so much email traversing the connection, the bandwidth gets sucked up and, depending on the actual amount of SPAM coming in, it could render the connection unusable until all the mail is filtered through. When there are thousands of messages, that could take days or even weeks. Since the originating address is spoofed, those who don’t know better are going to get very upset with who they *believe* to be responsible for sending the email. The last item I am going to touch on is that the person whose email address was spoofed now has to deal with all the auto-responses and whatever else may automatically come their way. (I think you get the idea.)

What can I do?
There is nothing that you can do to completely avoid it, short of not using the internet or email, but there are some steps that you can take. One of the first is to take a look at SPF (Sender Policy Framework). To set this up in DNS, you need to do the following:

In your DNS zone file for server.com, you should add something like the following:

server.com.  IN TXT    "v=spf1 a mx -all"
  • v – The version of SPF to use
  • a mx – The DNS records (A and MX) permitted to send messages for server.com
  • -all – Reject everything else that does not match a or mx

This can also get more in depth depending on the number of email accounts you have and from where. For instance, let’s say your mail server’s name is mail.server.com and you also have email accounts on gmail (gmail.com) and at your work (myjob.com). Your line would look something like the following:

server.com.   IN   TXT   "v=spf1 mx a:mail.server.com include:gmail.com include:myjob.com -all"

The a:mail.server.com entry is saying that the host mail.server.com is authorized to send mail for the domain. The include statements are basically saying that everything considered legitimate by either gmail.com or myjob.com should also be considered legitimate by you.
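
Once the record has propagated, you can check what the world sees with dig (the answer shown is just the example record from above):

$ dig +short TXT server.com
"v=spf1 mx a:mail.server.com include:gmail.com include:myjob.com -all"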

There is a lot more information available on configuring SPF. The documentation should be read thoroughly, as improperly configured SPF can prevent legitimate email from flowing.

SPF is just one method that can be used to fight against being a victim of a Joe job. You should always be using some method of SPAM filtering in addition to SPF. Layered security needs to be the approach when locking down any type of server or service.

File::ReadBackwards

Thursday, March 15th, 2007

Description: File::ReadBackwards works similarly to the Linux shell command tac. It reads a file line by line, starting from the end of the file.

CPAN: File::ReadBackwards

Example 1:
Being a systems administrator, I am usually doing some analysis on a large logfile, and I may not need all the information contained in the log. This is especially true if the logs only get rotated once a day or once a week. Using File::ReadBackwards in combination with a date and time calculation module, I can take only the window of time I want from the logs and then stop processing. Since we aren’t covering the date calculations here, I will push those out to another subroutine that we will assume works.

# Always use these
use strict;
use warnings;

# Use the module itself
use File::ReadBackwards;

# Define the log file to be read
my $log = "/var/log/log_file";

# Open the logfile by tie'ing it to the module
tie *LOG, "File::ReadBackwards", "$log"
   or die ("$log tie error: $!");

# Iterate over the logfile
while (my $line = <LOG>) {

  # Split the log line
  my @entry = split(/\s+/, $line);

  # Take the timestamp and check if we
  #   have hit our threshold yet
  # Break loop if we have
  last if (time_reached($entry[0]) == 1);
}

# Cleanup
untie (*LOG);
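
For completeness, here is a minimal sketch of what that time_reached() subroutine might look like, assuming Date::Parse and a parseable timestamp in the first field; adjust the parsing to your actual log format:

use Date::Parse;

# Only process entries from the last 6 hours (arbitrary threshold)
my $threshold = time() - 6 * 3600;

sub time_reached {
    my ($stamp) = @_;

    # str2time() returns undef on timestamps it can't parse
    my $epoch = str2time($stamp) or return 0;

    # Reading backwards, so once an entry is older than
    # the threshold, we are done
    return ($epoch < $threshold) ? 1 : 0;
}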

Building Telephony Systems With Asterisk

Monday, March 12th, 2007

The next generation in telephony in combination with FOSS (Free and Open Source Software) is Asterisk. With the Open Source community revolutionizing telephony, Asterisk is forging the way ahead. If you don’t know what Asterisk is, then you are going to be left behind.

Vitals:
Title: Building Telephony Systems With Asterisk
Author: David Gomillion & Barrie Dempster
Pages: 176
ISBN: 1904811159
Publisher: Packt Publishing
Edition: 1st Edition
Purchase: Amazon

Audience:
If you are looking for a way to save money on a phone system, or for how to deploy Asterisk in either a business or a personal environment, then this is the book for you. Even if you are just looking to find out more about VoIP, voicemail systems, or the foundations of how telephony works, this is a must read. If you have an existing Asterisk system and are looking for ways to tweak it or make it more efficient, then you need this book to take you through the first steps. This book caters more to those with less experience with Asterisk.

Summary:
As with any introduction to a new system, the most vital questions are: What is it? And is it for me? The authors of this book discuss the background of what it (Asterisk) is in great detail in the first chapter. Then they answer the second question by discussing both the pros and the cons from many perspectives. Assuming that you have decided that Asterisk is the solution for you (based on the information in chapter 1), it’s time to look into deploying an Asterisk server. First it is necessary to take stock of what you have to work with and what your capabilities are. The authors discuss the various telephony capabilities ranging from POTS, T1s (and frame relay), and ISDN for the medium, and then move on to SIP, IAX, H.323, and others for the software protocols. The last part of the planning stage is determining what you need and how to make it scalable. Given various scenarios of initial stages and growth, the authors begin alluding to dial plans, extensions, and some of the other aspects that make Asterisk so versatile.

Chapter 3 starts right from the basic installation of Asterisk and familiarization with the configuration files. So as not to waste too much time on building programs from source, the authors move right into the actual configuration. This is one of the places where the book excels. Since Asterisk is a very configurable program, it has many configuration files and configuration items. The authors take the time to go through, at least basically, each one of the major configuration files. First they start with zaptel.conf and zapata.conf for the hardware. Then it’s time to move on to the software configuration, where we configure sip.conf and iax.conf. Next is one of the most important aspects of our Asterisk configuration, voicemail.conf. The chapter is then finished up with some of the more interesting aspects of Asterisk like queues, conference rooms, and music on hold.

Now that the Asterisk base has been installed, the authors walk you through configuring the dialplan. This is where Asterisk’s power really shows through. There are many advanced features covered here like call parking, direct inward dialing, automated attendants, and other advanced call distribution mechanisms. The authors then discuss different methods of logging (CDR – Call Detail Records). Also covered is the ability to record and monitor calls (with even a note on legal issues).

One of the best features of Asterisk is its versatility. Asterisk@Home is deceiving by name. Housed on CentOS Linux, Asterisk@Home provides a more graphical and user friendly configuration mechanism called AMP, the Asterisk Management Portal. This chapter covers the way to configure Asterisk@Home through AMP and how each configuration aspect matches the concepts covered in Chapter 4. They even show integration of Asterisk and SugarCRM, a widely used FOSS customer relationship management package.

The authors now come to my favorite way of teaching: real life application. They use multiple case studies, as is a staple of authors for Packt Publishing. There are explanations of a SOHO (Small Office/Home Office) setup, a small business setup, and a hosted PBX setup. The book is then rounded out by explanations of maintenance, backup (and restore), and security. Many of the topics discussed with regard to security are general security topics such as host based security, rule based access control, and firewalling. The final notes discuss scalability and various support mechanisms for Asterisk.

Opinion:
Although I found this book slightly difficult to get through, it was jam-packed with information. I was especially impressed with the way the authors covered and explained the configuration files. As always, I thoroughly enjoyed the case studies and real life examples provided by the authors.

The one item which I feel wasn’t well covered in this book is call quality. It is generally well known that call quality with VoIP has a tendency to be a problem. Since Asterisk is a transport medium with the flexibility for many configuration tweaks, I think there should have been more discussion about call quality and its enhancement.

Overall, I found this book to be extremely helpful, although dry at times. There is a lot of material to be conveyed and the authors did their best under the circumstances. This book is an excellent starting point for anyone who needs to bring Asterisk into their world and needs to start from square one.

File::Bidirectional

Friday, March 9th, 2007

Description: The author of this module notes that it is best used, especially by him, when reading or manipulating log files. I have a tendency to use it for the exact same thing, especially when looking for context around captured lines.

CPAN: File::Bidirectional

Note:
Although the tie’d interface as I have used it takes approximately 2 1/2 times as long as a regular file read (according to the module’s benchmarks), it is still a very handy tool and saves you from reinventing the wheel.

Example 1:
Here we are going to go through a log file, and when we hit the time stamp we want, we are going to change directions and go back through. There is no real reason to change direction here; I am merely demonstrating how it would be accomplished.

# Always use these
use strict;
use warnings;

# Use the module itself
use File::Bidirectional;

# Define the log file to be read
my $log = "/var/log/log_file";

# Open the logfile by tie'ing it to the module
#  This is exactly the same as File::ReadBackwards
tie *LOG, "File::Bidirectional", "$log", {mode => 'backward'}
   or die ("$log tie error: $!");

# Iterate over the logfile
while (my $line = <LOG>) {

  # Split the log line
  my @entry = split(/\s+/, $line);

  # Take the timestamp and check if we
  #   have hit our threshold yet
  # Get the line # then change direction
  if (time_reached($entry[0]) == 1) {
    my $line_num = (tied *LOG)->line_num();
    (tied *LOG)->switch();
  }
}

# Cleanup
untie (*LOG);

Ensuring Proper New DST Compliance

Tuesday, March 6th, 2007

By now, if you haven’t heard about the change in daylight saving time (DST), then you need to come out from under your rock and update your servers. That’s where, luckily, I come in. No, I won’t update your servers for you (unless you pay me, and even then it’s iffy), but I will guide you through updating them.

Before I jump headlong into helping you out, I feel as though (since it’s my blog anyway) that I just want to rant for a minute…

<rant>I heard that we tried this a while back and they had kids walking to school in the dark…which in and of itself isn’t good. So we figure, let’s try it again. If we can just save a little bit of money by everyone using 1 hour or so less power for 2 weeks, that everything would be dandy. I don’t think anyone took into account the man hours that it would take to update all the software and all the hardware and every little thing that uses dates and has relied on the system that we used for many years. And amusingly enough, if someone doesn’t get everything perfect with the updates for the power systems, then who knows, this may require people to use more power to make up for mistakes. Who’d have thought something like this has the possibility to backfire?</rant>

The first thing you need to do is to check to see if you are updated and ready to go. To do that, if you are in the EST/EDT timezone and your system isn’t updated, then it will look like this when you give it the following command:

# zdump -v /etc/localtime  | grep 2007
/etc/localtime  Sun Apr  1 06:59:59 2007 UTC = Sun Apr  1 01:59:59 2007 EST isdst=0 gmtoff=-18000
/etc/localtime  Sun Apr  1 07:00:00 2007 UTC = Sun Apr  1 03:00:00 2007 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Oct 28 05:59:59 2007 UTC = Sun Oct 28 01:59:59 2007 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Oct 28 06:00:00 2007 UTC = Sun Oct 28 01:00:00 2007 EST isdst=0 gmtoff=-18000

This is still on the old timezone setting since it says April 1. If that’s too confusing for you, here’s a handy dandy little one-liner that may be a little more straightforward:

# zdump -v /etc/localtime  | grep 2007  | grep -q "Mar 11" && echo "New DST Compliant" || echo "Not New DST Compliant"
Not New DST Compliant

It will say “Not New DST Compliant” if your setup isn’t currently in compliance with the new DST setup.
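
If you don’t want to rely on your own /etc/localtime (say you want to check a zone other than the one the box is in), you can point zdump at the zoneinfo database directly. This is a sketch under the assumption that your zone files live in the usual /usr/share/zoneinfo location:

# zdump -v /usr/share/zoneinfo/America/New_York | grep 2007 | grep -q "Mar 11" && echo "New DST Compliant" || echo "Not New DST Compliant"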

To become DST compliant, all you really have to do is update your libc6 and locales packages. This update will ensure that your timezone data is up to date and your system will then respond to the time change accordingly. On a Debian, Ubuntu, or other apt based system, you will only have to do the following (note: the ‘#’ means we are already root; if you are on Ubuntu, put sudo in front of the command):

# apt-get update && apt-get install libc6 locales

Gentoo:

# emerge --sync && emerge sys-libs/timezone-data

If all went well, then you should be able to execute the same command as above and it will look like this:

# zdump -v /etc/localtime  | grep 2007
/etc/localtime  Sun Mar 11 06:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 EST isdst=0 gmtoff=-18000
/etc/localtime  Sun Mar 11 07:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Nov  4 05:59:59 2007 UTC = Sun Nov  4 01:59:59 2007 EDT isdst=1 gmtoff=-14400
/etc/localtime  Sun Nov  4 06:00:00 2007 UTC = Sun Nov  4 01:00:00 2007 EST isdst=0 gmtoff=-18000

And if you are still being lazy, here is the shorthand command again and the output that you should get:

# zdump -v /etc/localtime  | grep 2007  | grep -q "Mar 11" && echo "New DST Compliant" || echo "Not New DST Compliant"
New DST Compliant

That should be all you have to do. Remember that if you aren’t in MY timezone (Eastern Time, USA), then your output will be a little different.

Update: The error in the Gentoo update has been fixed. Thanks to Neo for pointing it out.

Underused Tools

Monday, March 5th, 2007

There are a lot of tools for administration and networking that generally go unused. They are very helpful in both diagnostics and general administration. Some of them even come installed with linux by default yet go unused and unheard of. Here I am going to cover a mere few of my favorites and hope that they work for you as well.

  1. traceproto
    The first tool I want to cover is one of my favorites when writing firewall scripts and is a close relative of traceroute; it’s called traceproto. traceproto doesn’t come installed by default on most linux systems. It is a replacement (or even just a complement) for traceroute that goes the extra mile. Like traceroute, you can change the ports and ttl (time to live) of your queries. The extra mile is that you can specify whether to use tcp, udp, or icmp along with those ports. You can also specify the source port of the network traffic.
    The way that I make best use of this tool is when I am writing firewall scripts. For instance, when I allow a service like DNS or NTP through on a firewall, it can sometimes be difficult to test whether my firewall rules are letting the packets through (since I have multiple levels of firewalls). Therefore, I use traceproto as follows; this trace checks DNS (UDP port 53) against a nameserver, and the NTP (UDP port 123) variant appears in the sketch after this list:

    root@tivo:~# traceproto -d 53 -p udp ns1.myserver.com
    traceproto: trace to ns1.myserver.com (1.2.3.4), port 53
    ttl  1:  ICMP Time Exceeded from 192.168.1.1 (192.168.1.1)
            0.83300 ms      0.67900 ms      0.71300 ms
    ttl  2:  ICMP Time Exceeded from 10.75.128.1 (10.75.128.1)
            11.577 ms       6.1550 ms       6.4960 ms
    ... Removed for brevity ...
    ttl  11:no response     no response     no response
    ttl  12:  UDP from myserver.com (1.2.3.4)
            132.07 ms       126.28 ms       125.88 ms

    hop :  min   /  ave   /  max   :  # packets  :  # lost
    -------------------------------------------------------
      1 : 0.67900 / 0.74167 / 0.83300 :   3 packets :   0 lost
      2 : 6.1550 / 8.0760 / 11.577 :   3 packets :   0 lost
      3 : 5.9680 / 7.0697 / 7.6650 :   3 packets :   0 lost
      4 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      5 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      6 : 8.8930 / 12.198 / 15.810 :   3 packets :   0 lost
      7 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      8 : 9.2340 / 24.556 / 32.438 :   3 packets :   0 lost
      9 : 9.8230 / 13.669 / 18.890 :   3 packets :   0 lost
     10 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
     11 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
     12 : 125.88 / 128.08 / 132.07 :   3 packets :   0 lost
    ------------------------Total--------------------------
    total 125.88 / 22.834 / 132.07 :  21 packets :  15 lost
  2. pstree, pgrep, pidof
    Although these are 3 separate tools, they are all very handy for process discovery in their own right.

    To take advantage of the pidof command, you just need to figure out which program’s process ids you want. Two ways to demonstrate this are to use either kthread or apache2 as follows:

    # pidof apache2
    29297 29291 29290 29289 29245 29223 29222 29221 20441
    # pidof kthread
    6

    By typing pstree, you will see exactly what it is capable of. pstree outputs an ASCII graphic of the process list, separating it into parents and children. By adding the -u option to pstree, you can see if your daemons made their uid transitions. It is also extremely useful for displaying the SELinux context of each process (via the -Z option, if pstree was built with it). To see the children of kthread, which we found above to be pid 6, we can use these commands in conjunction.

    # pstree `pidof kthread`
    kthread-+-aio/0
            |-kacpid
            |-kblockd/0
            |-kgameportd
            |-khubd
            |-kmirrord
            |-kpsmoused
            |-kseriod
            `-2*[pdflush]

    And finally pgrep. There are many ways to make use of pgrep. It can be used like pidof:

    # pgrep -l named
    18935 named

    We can also list all running processes that are not attached to the controlling terminal pts/1:

    # pgrep -l -t pts/1 -v
    1 init
    2 ksoftirqd/0
    3 watchdog/0
    4 events/0
    ... Removed for brevity ...
    10665 getty
    18975 named
    19009 qmgr
    25447 sshd
    25448 bash
    29221 apache2
  3. tee
    There are sometimes commands that can take a long time to run. You want to see the output, but you also want to save it for later. How can we do that? We can use the tee command, which sends the output to STDOUT and also writes (or appends) it to a file. For simplicity, here is an example of tee using df.

    df -h | tee -a snap_shot
  4. tac
    Everyone knows about cat; it’s what we use to list the entire contents of a file. cat has a little known cousin that is usually installed by default on a system called tac. It prints the contents of a file with the line order reversed (last line first).
  5. fuser
    fuser displays the process id of all processes using the specified file or file system. This has many handy uses. If you are trying to unmount a partition and want to know why it’s still busy, then run fuser on the file system and find out which processes are still using the device. fuser is even nice enough to tell you how each process is using the file or file system (the letter appended to each PID). For example, I want to umount /root/, but I can’t and I don’t know why (see the sketch after this list for convincing the culprits to let go):

    # fuser /root/
    /root:          29475c 29483c

    Hmm, the c means those processes have their current directory inside /root. Maybe I need to watch what I’m doing.
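
As promised above, here are two quick variants. Both are sketches, so substitute your own hosts (ntp.myserver.com is made up) and think twice before reaching for fuser’s kill switch:

# Same traceproto trick, but against the real NTP port (UDP 123)
traceproto -d 123 -p udp ntp.myserver.com

# Have fuser kill whatever is keeping the file system busy (use with care)
fuser -k /root/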

Most of these tools don’t fall into the same category, but they are all useful in their own right. I hope you can make as good use of them as I do. There are many more little known tools that come with many linux installs by default; these are just a few of the common ones that I take advantage of on a regular basis.

Syslog-ng and Squid Logging

Monday, February 26th, 2007

Since there are a million HOWTOs on setting up remote logging with syslog-ng, I won’t bother going over it again. I will, however, take this moment to go into how you can set up remote logging of your Squid servers. We are going to take advantage of syslog-ng’s built-in regex support and also some of its categorizing capabilities.

Organization

Before we begin, I want to discuss a little about organization. It’s one of the things that I cover because I think it’s important. I won’t step up onto my soapbox as to why right now, but I will cover it some other time and it will relate to security and system administration which is what I know most of you are here for.

Keeping your logs organized allows programs like logrotate, log analysis scripts, and even custom rolled scripts to do their jobs properly and efficiently. Part of organization is also synchronization: you should ensure that NTP is properly set up so that the times on all logs, on both the server and the clients, are in sync. Some log analysis programs are finicky and won’t work properly unless everything is in chronological order. Time fluctuations also make for confusing reading if you are trying to do forensics on a server.

Squid Server Setup

Setting up your Squid server to do the logging and send it to a remote server is relatively easy. The first thing you need to do is modify your squid.conf file to log to your syslog. Your squid.conf is generally located at /etc/squid/squid.conf. Find the line that begins with the access_log directive. It will likely look like this:

access_log /var/log/squid/squid.log squid

I recommend doing the remote logging as an addition to current local logging. Two copies are better than one, especially if you can spare the space and handle the network traffic. Add the following line to your squid.conf:

access_log syslog squid

This tells squid to keep a second access log, sending it to the syslog in the standard squid logging format.

We also have to ensure that squid is not logged twice on your machine. This means using syslog-ng’s filtering capabilities to remove squid from being logged locally by the syslog. Edit your syslog-ng.conf file and add the following lines.

# The filter removes all entries that come from the
#   program 'squid' from the syslog
filter f_remove { program("squid"); };

# Everything that should be in the 'user' facility
filter f_user { facility(user); };

# The log destination should be the '/var/log/user.log' file
destination df_user { file("/var/log/user.log"); };

# The log destination should be sent via UDP
destination logserver { udp("logserver.mycompany.com"); };

# The actual logging directive
log {
    # Standard source of all sources
    source(s_all);

    # Apply the 'f_user' filter
    filter(f_user);

    # Apply the 'f_remove' filter to remove all squid entries
    filter(f_remove);

    # Send whatever is left in the user facility log file to
    #  to the 'user.log' file
    destination(df_user);

    # Send it to the logserver
    destination(logserver);
};

Without describing all the lines that should be in a syslog-ng.conf file (as one should read the manual to find that out), I will merely say that s_all has the source for all the syslog possibilities.
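
Once you restart syslog-ng, you can sanity-check the filters without waiting for real squid traffic. logger lets you tag a message with an arbitrary program name, so a message tagged squid should reach the logserver while staying out of the local user.log. A quick sketch (the init script path may differ on your distribution):

# /etc/init.d/syslog-ng restart
# logger -t squid -p user.notice "syslog-ng squid filter test"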

Log Server Setup

Although setting up your logserver might be a little more complex than setting up your squid server to log remotely, it is also relatively easy. The first item of interest is to ensure that syslog-ng is listening on the network socket. I prefer to use UDP even though, unlike TCP, it offers no guarantee of message delivery; it tolerates network latency better when transferring data across poor connections. Do this by adding udp() to your source directive:

# All sources
source src {
        internal();
        pipe("/proc/kmsg");
        unix-stream("/dev/log");
        file("/proc/kmsg" log_prefix("kernel: "));
        udp();
};
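
Keep in mind that udp() listens on the standard syslog port, 514/udp. If the logserver runs a firewall, you will need to let your clients in; a minimal iptables sketch (the subnet is an example, use your own):

# iptables -A INPUT -p udp --dport 514 -s 192.168.1.0/24 -j ACCEPT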

Next you need to setup your destinations. This includes the destinations for all logs received via the UDP socket. As I spoke about organization already, I won’t beat a dead horse too badly, but I will show you how I keep my logs organized.

# Log Server destination
destination logs {
  # Location of the log files using syslog-ng internal variables
  file("/var/log/$HOST/$YEAR/$MONTH/$FACILITY.$YEAR-$MONTH-$DAY"

  # Log files owned by root, group is adm and permissions of 665
  owner(root) group(adm) perm(665)

  # Create the directories if they don't exist with 775 perms
  create_dirs(yes) dir_perm(0775));
};

We haven’t actually done the logging yet. There are still filters to set up so we can see what squid is doing separately from the other user level log facilities, and the proper destinations have to be created. Following along the same lines for squid,

# Anything that's from the program 'squid'
#  and the 'user' log facility
filter f_squid { program("squid") and facility(user); };

# This is our squid destination log file
destination d_squid {
  # The squid log file with dates
  file("/var/log/$HOST/$YEAR/$MONTH/squid.$YEAR-$MONTH-$DAY"
  owner(root) group(adm) perm(665)
  create_dirs(yes) dir_perm(0775));
};

# This is the actual Squid logging
log { source(src); filter(f_squid); destination(d_squid); };

# Remove the 'squid' log entries from 'user' log facility
filter f_remove { not program("squid"); };

# Log everything else minus the categories removed
#  by the f_remove filter
log {
        source(src);
        filter(f_remove);
        destination(logs);
        };

We have just gone over how one should organize basic remote logging and handle squid logging. Speaking as someone who has a lot of squid log analysis to do, centrally locating all my squid logs makes log analysis and processing easier. I also don’t have to start transferring logs from machine to machine to do analysis. This is especially useful when logs like squid’s can be in excess of a few gigs per day.

Linux Firewalls and QoS

Thursday, February 15th, 2007

There are complex and simple firewalls. They can be as simple or as in-depth as one is willing to put the time and effort into learning and configuring. The simple ones just allow or drop packets based on protocol, source, or destination IP. The complex ones deal with QoS (Quality of Service) or the L7 packet classification filter.

Vitals:
Title: Designing and Implementing Linux Firewalls and QoS using netfilter, iproute2, NAT, and L7-filter
Author: Lucian Gheorghe
Pages: 288
ISBN: 1-904811-65-5
Publisher: Packt Publishing
Edition: 1st Edition
Purchase: Amazon

Audience:
To appreciate how well this book covers each of the topics it delves into, one needs a certain understanding of firewalls and of what their components are used for.

Summary:
As is reminiscent of many of the books written by authors for Packt Publishing, the first chapter begins with descriptions and re-introductions to many of the basic networking concepts. These include the OSI model, subnetting, supernetting, and a brief overview of the routing protocols. Chapter 2 discusses the need for network security and how it applies to each of the layers of the OSI model.

Chapter 3 is when we start to get into the nitty gritty of routing, netfilter, and iproute2. Here is where the basics of tc are covered, including qdiscs, classes, and filters. This is where the examples start coming. The real world examples used throughout the book are what make the book easy enough not only to understand, but also to apply to your network. Chapter 4 discusses NAT (Network Address Translation) and how it happens from within iptables. It also discusses packet mangling and talks about the difference between SNAT (Source NAT) and DNAT (Destination NAT). The real life example in this chapter discusses how double NAT may need to be used when implementing a VPN (Virtual Private Network) solution between end points.

Layer 7 filtering is the topic of Chapter 5. Layer 7 filtering is a relatively new concept in the world of firewalling, and the author tackles it right from square one. He talks about applying the kernel and IPTables patches (which have the potential to be very overwhelming concepts). One of the neat concepts that the author chooses to use in the example for this chapter is bandwidth throttling and traffic control for layer 7 protocols like BitTorrent (a notorious bandwidth user). He also covers some of the IPP2P matching concepts and contrasts them with using layer 7.

Now is where we get to the full-fledged examples. The first is for a SOHO (Small Office Home Office). It covers everything from DHCP to proxying to firewalling and even traffic shaping. Next is a medium size network case study. This includes multiple locations, servers providing similar functionality with redundancy, virtual private networks, IP phones and other means of communication, and the traffic shaping and firewalling for all these services. He also discusses a small ISP example. The book finishes up by discussing large scale networks and creating the same aspects as for the medium and small sized networks. The difference is that now the ideas are spread across cities using Gigabit ethernet connections, ATM, MPLS, and other methods of high speed data transfer. There is even information on Cisco IOS and how their routers can be deployed in large scale networks, along with lower level routing protocols like BGP and firewalling and routing servers like Zebra. And he finishes up with one of my favorite topics, “security.”

Opinion:
Although this book covers some of the most difficult topics with regard to the internet, networking, security, traffic shaping, and general network setup, it is handled very well. Each chapter begins with a summary of information that needs to be known and understood for the coming chapter. I was able to put this book to work immediately (even before finishing it) with the need to traffic shape the network traffic in an office which required better VoIP (Voice Over IP) support.

I would recommend this book to anyone and everyone who has any responsibility for a firewall or network of any kind. One of the best aspects of the book is how up to date it is; it uses the 2.6.12 kernel for applying the layer 7 kernel patches. The ideas and concepts in this book will be valid and current for a long time, especially since the major protocols that the book covers, like BitTorrent and other P2P applications, remain prevalent in our networks. If you have anything to do with networking at all, I strongly suggest getting your hands on this book, if not to understand the networking and traffic shaping concepts, then at least as a reference.

A Few Apache Tips

Tuesday, February 13th, 2007

Last week I gave a few tips about SSH, so this week I think I will give a few tips about apache. Just to reiterate, these are tips that have worked for me and they may not be as efficient or as effective for your style of system administration.

Logging

I don’t know about anyone else, but I am a log junkie. I like looking through my logs, watching what’s been done, who has gone where, and so on. But one of the things I hate is seeing my own entries tattooed all over my logs. This is especially true if I am just deploying a creation onto a production server (after testing, of course). Apache2 comes with a few neat directives that allow control over what goes into the logs:

        SetEnvIf  Remote_Addr   "192\.168\.1\."         local

This directive can go anywhere. For our purposes, it will be used in tandem with the logging directives. Let’s take the following code and break it down:

        SetEnvIf  Remote_Addr   "192\.168\.1\."         local
        SetEnvIf  Remote_Addr   "10\.1\.1\."         local
        SetEnvIf  Remote_Addr   "1\.2\.3\.44"         local
        CustomLog               /var/log/apache2/local.log   common env=local
        CustomLog               /var/log/apache2/access.log  common env=!local

The first 3 lines are telling apache that if the server environment variable Remote_Addr matches 192.168.1.*, 10.1.1.*, or 1.2.3.44, then the address should be considered a local address. (Note that the ‘.’ (periods) are escaped with a backslash; this is how apache turns the IP address into a regular expression (wildcarding). Do not forget the backslash ‘\’ or the IPs will not match.) By themselves, these statements mean nothing. It is within the custom logging directives that we can either include or exclude them; hence our logging statements. The first logging statement defines our local.log file, which will only contain the entries that are from the IPs we have listed as local. The second is our regular access log file, the main difference being that our access.log file will now have none of our local IP accesses and will thus be cleaner. This is also handy if you use a log analyzer: you will have less excluding of IPs to do there because you are controlling what goes into the logs on the front end.
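
An easy way to confirm the split is working is to watch both files while you browse the site from one of your local addresses; your hits should show up in local.log and everyone else’s in access.log. A quick sketch using the paths configured above:

# tail -f /var/log/apache2/local.log /var/log/apache2/access.log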

Security

As with anything else I talk about, I will generally throw in a few notes about security. One of my favorite little apache modules is mod_security. I am not going to put a bunch of information about mod_security in this article as I have already written about it up on EnGarde Secure Linux here. Either way, take a look at it and make good use of it. This is especially the case if you are new to web programming and have yet to learn about how to properly mitigate XSS (Cross Site Scripting) vulnerabilities and other web based methods of attacks.

Google likes to index everything that it can get its hands on. This is both advantageous and disadvantageous at the same time. So for that reason, you should do 2 things:

  1. Turn off directory indexing where it isn’t needed:
    Every directory entry is a candidate. If you already have an index file (index.cgi, index.php, index.pl, index.html, etc.), then you should have no need for directory indexes. If you don’t have a need for something, then shut it off. In the example below, I have removed the Indexes option to ensure that if there is no index file in the directory, a 403 (HTTP Forbidden) error is thrown rather than a directory listing that is accessible and indexable by a search engine.

    <Directory /home/web/eric.lubow.org-80/html>
                    Options -Indexes
                    AllowOverride None
                    Order allow,deny
                    allow from all
    </Directory>
  2. Create a robots.txt file whenever possible:
    We all have files that we don’t like or don’t want others to see. That’s probably why we shut off directory indexing in the first place. As another way to keep search engines from indexing things, we create a robots.txt file. Assuming we don’t want our test index html file to be indexed, we would have the following robots.txt file.

    User-agent: *
    Disallow: /index.test.html

    This says that any agent that wants to know what it can’t index will look at the robots.txt file, see that it isn’t allowed to index /index.test.html (note that Disallow takes a URL path, so it should begin with a slash), and leave it alone. There are many other uses for a robots.txt file, but that is a very handy and very basic setup.

If you notice in the above example, I have also created a gaping security hole if the directory that I am showing here has things that shouldn’t be accessible by the world. For a little bit of added security, place restrictions here that would normally be placed in a .htaccess file. Change from:

Order allow,deny

to

Order deny,allow
deny from all
# Local subnet
allow from 192.168.1.

This will allow only the 192.168.1.* class C subnet to access that directory. Note the added deny from all: with Order deny,allow, anything that matches neither directive is allowed by default, so without that line everyone would still get in. (Also note that apache doesn’t allow trailing comments on directive lines, so the comment sits on its own line.) And since you turned off directory indexing, if the index file gets removed, then users in that subnet will not be able to see the contents of that directory. Just as with TCPWrappers, you can have as many allow from lines as you want. Just remember to comment them and keep track of them so they can be removed when they are no longer in use.

If you are running a production web server that is out there on the internet, then you should be wary of the information that can be obtained from a misconfigured page or something that may cause an unexpected error. When apache throws an error page or a directory index, it usually shows a version string, something similar to this:

Apache/2.0.58 (Ubuntu) PHP/4.4.2-1.1 Server at zeus Port 80

If you don’t want that kind of information to be shown (and you usually shouldn’t), then you should use the ServerSignature directive.
The ServerTokens directive controls what apache puts into the HTTP header; normally an entire version string would go in there. If you have ServerTokens Prod in your apache configuration, then apache will only send the following in the HTTP headers:

Server: Apache

If you really want more granular control over what apache sends in the HTTP header, then make use of mod_security. You can change the header entirely should you so desire. You can make it say anything that you want which can really confuse a potential attacker or someone who is attempting to fingerprint your server.
With all this in mind, the following two lines should be applied to your apache configuration:

ServerSignature off
ServerTokens Prod
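
You can verify what the server is actually sending with curl (or anything else that shows response headers). Assuming ServerTokens Prod is in place, you should see nothing more than the product name; a quick sketch:

$ curl -sI http://localhost/ | grep ^Server
Server: Apache
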
Organization

One of the other items that I would like to note is the organization of my directory structure. I have a top level directory in which I keep all my websites: /home/web. Below that, I keep a constant structure of subdomain.domain.tld-port/{html,cgi-bin,logs}. My top level web directory looks like this:

$ ls /home/web
eric.lubow.org-80
dev.lubow.org-80
gallery.lubow.org-80

Below that, I have a directory structure that also stays constant:

$ ls /home/web/eric.lubow.org-80
cgi-bin
html
logs

This way, every time I need to delve deeper into a set of subdirectories, I always know what the next subdirectory is without having to hit TAB a few times. Consistency not only allows one to work faster, but also helps one stay organized.

Tuning

Another change I like to make for speed’s sake is to change the timeout. The default is set at 300 seconds (5 minutes). If you are running a public webserver (not off of dialup) and your requests are taking more than 60 seconds, then there is most likely a problem. The timeout shouldn’t be too low, but somewhere between 45 seconds (really on the low end) and 75 seconds is usually acceptable. I keep mine at 60 seconds. To do this, simply change the line from:

Timeout 300

to

Timeout 60

The other speed tuning tweak I want to go over is keep alive. The relevant directives here are MaxKeepAliveRequests and KeepAliveTimeout. Their default values are 100 and 15 respectively. The problem with tweaking these variables is that changing them too drastically can cause a denial of service for certain classes of clients. For the sake of speed since I have a small to medium traffic web server I have changed the values of mine to look as follows:

MaxKeepAliveRequests 65
KeepAliveTimeout 10

Be sure to read up on exactly what these do and how they can affect you and your users. Also check your logfiles (which you should now have a little bit more organization of) to ensure that your changes have been for the better.
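
One way to check your changes is to benchmark before and after. ApacheBench (ab), which ships with apache, can exercise keep-alive connections with the -k switch; a rough sketch:

$ ab -k -n 100 -c 10 http://localhost/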

Conclusion

As with the other articles, I have plenty more tips and things that I do, but here are just a few. Hopefully they have helped you. If you have some tips that you can offer, let me know and I will include them in a future article.

SSH Organization Tips

Friday, February 9th, 2007

Over the years, I have worked with many SSH boxen and had the pleasure to manage even more SSH keys. The problem with all that is the keys start to build up and then you wonder which boxes have which keys in the authorized keys file and so on and so on. Well, I can’t say I have the ultimate solution, but I do have a few tips that I have come across along the way. Hopefully they will be of use to someone else besides myself.

  1. Although this should hopefully already be done (my fingers are crossed for you), check the permissions on your ~/.ssh directory and the files contained in it.
    $ chmod 700 ~/.ssh
    $ chmod 600 ~/.ssh/id_dsa
    $ chmod 640 ~/.ssh/id_dsa.pub
  2. Now that SSHv2 is pretty widely accepted, try using it for all your servers. If that isn’t possible, then use SSHv2 wherever you can. This means a few things.
    1. Change your /etc/ssh/sshd_config file to say:
      Protocol 2

      instead of

      Protocol 1
    2. Don’t generate any more RSA keys for yourself. Stick to the DSA keys:
      $ cd ~/.ssh
      $ ssh-keygen -t dsa
    3. Use public key based authentication and not password authentication. To do this change your /etc/ssh/sshd_config file to read:
      PubkeyAuthentication yes

      instead of

      PubkeyAuthentication no
  3. Keeping track of which keys are on the machine is a fairly simple yet often incomplete task. To allow a user to log in using their SSH(v2) key, we just add their public key to the ~/.ssh/authorized_keys file on the remote machine:
    1. Copy the file to the remote machine:
      $ scp id_dsa.pub user@host:.ssh/
    2. Append the key onto the authorized_keys file:
      $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    Before moving on here and just deleting the public key, let’s try some organizational techniques.

    1. Create a directory in ~/.ssh to store the public keys in:
      $ mkdir ~/.ssh/pub
    2. Move the public key file into that directory and change the name to something useful:
      $ mv ~/.ssh/id_dsa.pub ~/.ssh/pub/root@main.mydomain.com
    3. NOTE: Don’t do this unless you are sure that you can log in with your public key; otherwise you WILL lock yourself out of your own machine.

  4. Now a little of the reverse side of this. If a public key is no longer in use, then you should remove it from your ~/.ssh/authorized_keys file. If you have been keeping a file list in the directory, then the file should be removed from the directory tree as well (see the audit sketch after this list). A little housekeeping is not only good for security, but also gives some peace of mind in efficiency and general cleanliness.
  5. Although this last item isn’t really organizational, it is just really handy, so I will categorize it under the title of efficiency: using ssh-agent to ssh around. If you are a sysadmin and you want to type your passphrase only once when you log in to your computer, then do the following:
    1. Check to see if the agent is running:
      $ ssh-add -L

      NOTE: If the agent is running but has no keys loaded, it will say The agent has no identities. If the agent isn’t running at all, you will instead get an error such as Could not open a connection to your authentication agent.

    2. If it’s running, continue to the next step; otherwise start the agent and load its environment variables into your current shell:
      $ eval `ssh-agent`
    3. Now to add your key to the agent’s keyring, type:
      $ ssh-add

    SSH to a machine that you know has that key and you will notice that you no longer have to type in your passphrase while your current session is active.
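
As mentioned in the housekeeping item above, here is a tiny sketch for auditing which keys are currently authorized on a machine. It just prints the trailing comment field of each key, which is usually user@host if the key was generated normally:

$ awk '{print $NF}' ~/.ssh/authorized_keys
root@main.mydomain.com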

These are just some tricks that I use to keep things sane. They may not work for you, but some of them are good habits to get into. Good luck.

Super Security vs. Ease of Use

Monday, February 5th, 2007

I think I am going to get back onto my soapbox about being extraordinarily secure, only this time, I am going to compare it to ease of use. I would once again like to reiterate the fact that I am strongly for security in all its aspects. However, some people get into the presence of a security individual and freak out. They start saying that they know certain things are secure and insecure and then do them anyway.

A great example of this is SSH access to servers. Consider the following physical network layout.

                      ,---------- Server I
Inet---------|F/W|---+----------- Server II
                      `----------- Server III

If you want to leave certain nice to do’s or ease of use functionality available to your self such as leaving SSH open only to root or having a machine with anonymous FTP access available, then take a slightly different approach to securing your environment (or those particular machines): layered security. Without changing the physical layout of your network, change the network layout using iptables and/or tcp wrappers. Make the network look more like this:

                             ,------Server II
Inet-----|F/W|----Server I--<
                             `------Server III

This is essentially saying that all traffic that you want to funnel to Server II or Server III will now go through Server I. This can be used in a variety of ways. Let's say that all 3 servers are in the DMZ (De-Militarized Zone) and that Servers II and III are hosting the services available to the outside. Allowing direct access to them via SSH or FTP probably isn't the best idea because these are services that allow files to be changed. So what can we do to change this?

First let's configure TCP wrappers. Starting out with the assumption that you are being paranoid where necessary, let's set SSH up to only allow incoming connections from Server I and deny from everywhere else. In the /etc/hosts.deny file on Server II, add the following lines (everything that goes for Server II goes for Server III to mimic the setup):

sshd1: ALL
sshd2: ALL

Now in the /etc/hosts.allow file on Server II, add the IP address of Server I (IPs are all made up):

sshd1: 167.4.5.23
sshd2: none

This now ensures that the only way SSH traffic will be allowed to Server II is through (or from) Server I. But let's say that isn't enough for you. Let's say you want a little more security so you can sleep a little better at night. Enter netfilter and IPTables. On Server II, add the following to your firewall script:

iptables -N SSH                             # Create the SSH chain
iptables -A INPUT -p tcp --dport 22 -j SSH  # Send port 22 traffic to it
iptables -A SSH -s 167.4.5.23/32 -j ACCEPT  # Allow from Server I
iptables -A SSH -m limit --limit 5/s -j LOG # Log the problems
iptables -A SSH -j DROP                     # Drop everything else
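
Once the rules are in place, it's worth confirming that they loaded in the order you expect. Listing the SSH chain with packet counters also gives you a cheap way to watch the drops accumulate:

# iptables -L SSH -n -v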

So what's the point of all this extra configuration? Simple, it allows for a little more flexibility when it comes to your setup. Although I recommend having SSH keys and not allowing direct root access from anywhere, you can get away with a little more. You can allow root access via an SSH key. And if you have enough protections/layers/security in place, you may also even consider using a password-less SSH key depending on what the role of the contacting server is (ie. rsync over SSH).

In the optimal version of a network setup with SSH, you may want to only allow user access to Server II via entry through Server I. Then, to top it off, only allow sudo access to root if the user is in the sudoers file. This throws a lot more protection behind Server II, but makes it somewhat complicated to just do something simple. This is especially true if Server II is on an internal network which isn't very accessible anyway. The advice I generally give is in the form of the question, "What is the tradeoff?" More times than not, ease of use is the answer. So, keeping in mind that ease of use isn't out of the realm of possibility, just remember that layered security can be your friend; that is usually where the tradeoff comes from.

Some Introduction

Friday, February 2nd, 2007

First off I’d like to thank Dancho Danchev for the mention in his blog entry PR Storm. Part of the reason I created a blog was reading his and reposting it to Linux Security. I also plan on commenting a little on some of the things he has to say. Now that I have started a new job, I have a little more time to do such things.

I originally started my blog to keep track of a lot of things that I do on a regular basis. Now I feel like I have the time to share my ideas and make them slightly more readable for others and possibly even usable. So the likely thing to do is to cover things that I know: Perl, System’s Administration, Privacy, and Security. We’ll see where it goes from there.

My goal is to publish something at least once every other day, or 3-4 times per week. If I have more time, I will put more up. If anyone has any topics that they would like me to cover, please feel free to let me know. My email address is on my site (eric AT lubow dot org). Thanks. Enjoy.

10 More Tips Towards Securing Your Linux System

Wednesday, January 31st, 2007

Since everyone seemed to enjoy my first round of tips and tricks to securing a linux system, I figured I would throw together a few more. Enjoy.

  1. There are files that get changed very infrequently. For instance, if your system won’t have any users added anytime soon, then it may be sensible to make the /etc/passwd and /etc/shadow files immutable with chattr. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
       chattr +i /etc/passwd /etc/shadow
  2. Password protect your linux install with LILO. Edit your /etc/lilo.conf. At the end of each linux image that you want to secure, put the lines:
       read-only
       restricted
       password = MySecurePassword

    Ensure you rerun /sbin/lilo so the changes take effect.

  3. Users who have sudo accounts set up may have them configured to change to root without a password. To check this, as root use the following command:
       grep NOPASSWD /etc/sudoers

    If there is an entry in the sudoers file, it will look like this:

       eric    ALL=NOPASSWD:ALL

    To get rid of this, type visudo and remove the line in that file.

  4. Use sudo to execute commands as root as a replacement for su. In the /etc/sudoers file, add the following lines by using the visudo command:
       Cmnd_Alias LPCMDS = /usr/sbin/lpc, /usr/bin/lprm
       eric    ALL=LPCMDS

    Now the user ‘eric’ can sudo and use the lpc and lprm commands without having any other root level access.

  5. Turn off PasswordAuthentication and PermitEmptyPasswords in the SSH configuration file /etc/ssh/sshd_config. This will ensure that users cannot set empty passwords or log in without SSH keys.
       PermitEmptyPasswords no
       PasswordAuthentication no
  6. Instead of using “xhost +” to open up access to the X server, be more specific. Use the server name that you are allowing control to:
       xhost +storm:0.0

    Once you are done using it, remember to disallow access to the X server from that host:

       xhost -storm:0.0
  7. To find out what the .Xauthority magic cookie looks like and to send it (authorization information) to the remote host, use the following command:
       xauth extract - $DISPLAY | ssh storm xauth merge -

    The user who ran this command on the original host can now run X clients on storm. xauth needs to be present on both hosts.

  8. To turn on spoof protection, run a simple bash one-liner (see the sysctl sketch after this list for making it permanent):
       for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 1 > $i; done

    Be careful to remember that it drops packets more or less ‘invisibly’.

  9. A SYN-flood attack has the ability to bring the network aspect of your linux box to a snail-like crawl. The tcp_syncookies protection attempts to stop this from taking a heavy toll on the machine. To enable tcp_syncookies protection, use the following command:

       echo 1 > /proc/sys/net/ipv4/tcp_syncookies
  10. When possible use secure connection methods as opposed to insecure methods. Unless you are required to use telnet, substitute ssh (Secure SHell) in for rsh or telnet. Instead of POP3 or IMAP use SPOP3 or SIMAP (IMAPS). Both SIMAP and SPOP3 are just versions of IMAP and POP3 running over an SSL (Secure Socket Layer) tunnel.
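
One follow-up on tips 8 and 9: anything echoed into /proc is gone after a reboot. Most distributions read /etc/sysctl.conf at boot, so to make those settings permanent, add lines like the following (a sketch; the exact mechanism varies by distribution):

net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1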

10 Tips To Start Securing Your Linux System

Monday, January 29th, 2007

A while back I had been asked to write a few quick tips that, as an administrator, one would find helpful. They were published in one form or another and are now available here. There are MANY more, but these are just a few. Enjoy.

  1. Users who may be acting up or aren’t listening can still be controlled using a program called ‘skill’ (signal kill), which is part of the ‘procps’ package.
       Halt/Stop User eric: skill -STOP -u eric
       Continue User eric: skill -CONT -u eric
       Kill and Logout User eric: skill -KILL -u eric
       Kill and Logout All Users: skill -KILL -v /dev/pts/*
  2. Make use of security tools out there to test your server’s weaknesses. Nmap is an excellent port scanning tool to test to see what ports you have open. On a remote machine, type the command:
       # nmap -sTU <server_ip>

       Starting nmap 3.70 ( http://www.insecure.org/nmap/ ) at 2006-08-10 13:51 EST
       Interesting ports on eric (172.16.0.1):
       (The 3131 ports scanned but not shown below are in state: closed)
       PORT    STATE         SERVICE
       22/tcp  open          ssh
       113/tcp open          auth

       Nmap run completed -- 1 IP address (1 host up) scanned in 221.669 seconds
  3. Sometimes a production server ends up in a common area (although this should not be the case, some situations are inevitable). To avoid an accidental CTRL-ALT-DEL reboot of the machine, do the following to comment out the relevant line in the /etc/inittab file:

       # sed -i 's/ca::ctrlaltdel:/#ca::ctrlaltdel:/g' /etc/inittab
  4. Two SSH configuration options that can be set to improve security should be checked on your production server. UsePrivilegeSeparation is an option that, when enabled, will allow the OpenSSH server to run a small (necessary) amount of code as root and the rest of the code in a chroot jail environment. StrictModes checks to ensure that your ssh files and directories have the proper permissions and ownerships before allowing an SSH session to open up. The directives should be set in /etc/ssh/sshd_config as follows:

       UsePrivilegeSeparation yes
       StrictModes yes
  5. The default umask (user mask) on most systems should be 022 to ensure that files are created with the permissions 0644 (-rw-r–r–). To change the default umask setting for a system, edit /etc/profile to ensure that your umask is appropriate for your setup.
  6. Some users like to have a passwordless account. To check for this, you need to look at the /etc/shadow file with the following command:
    awk -F: '$2 == "" { print $1, "has no password!" }' /etc/shadow
  7. Just in case someone else who has access to the superuser account decided to alter the password file and potentially make another account a superuser, this is a method to check:
       awk -F: '$3 == 0 { print $1, "is a superuser!" }' /etc/passwd
  8. Setuid and setgid files have the potential to be very hazardous if they are accessible by the wrong users on the system. Therefore it is handy to be able to check which files fall into this category.
       find /dir -xdev -type f -perm +ug=s -print
  9. World writable files can be left around by users wanting to make things easier for themselves. It is necessary to be careful about who can write to which files. To find all world writable files:
       find /dir -xdev -perm +o=w ! \( -type d -perm +o=t \) ! -type l -print
  10. Some attackers, prior to attacking a host (or users nmaping a host), will check to see if the host is alive. They do this by ‘ping’ing the host: to check if the host is up, they will send an ICMP echo request packet. To disallow these types of packets, use iptables (a gentler rate-limiting variant appears after this list):

       iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
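
If dropping every echo request (tip 10) is too blunt, because it also breaks legitimate monitoring, a gentler variant is to rate limit pings instead of dropping them outright. A sketch:

       iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT
       iptables -A INPUT -p icmp --icmp-type echo-request -j DROP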

Patching Procedure vs. Exploitation Potential

Thursday, January 25th, 2007

When you talk to many security experts, they pretty much agree that when a vulnerability hits, it’s necessary that it be patched, and that it’s only a matter of time until the sh*t hits the fan and some real knowledgeable black hat puts something together for the script kiddies to play with. But a lot of people seem to forget that every time a patch is required on a production system, there is due process that system administrators must go through. One of the primary steps is simply evaluation.

The primary questions that needs to be evaluated are:

What is the likelihood of the vulnerability being exploited or the damage that could be caused if it is exploited?

vs.

How long will it take to apply the patch, test it, implement it, then deploy it to the production environment? What kind of impact will that have on the production servers in terms of outages/downtime? Will it break anything else?

Let’s take some time to break these down. I have always found that the easiest way for most people to understand a problem is to use an example. I don’t want to single out phpBB, but since it recently came up and spurred a necessary conversation, I will use it for my example. The advisory that I am referencing is available here from Bugtraq.

At one of the many websites I run, I administer a phpBB forum. The forum is relatively low volume, but high volume enough to attract spammers, which means it’s likely that it also attracts hackers (of the black hat variety). The phpBB version is 2.0.21. For a few reasons, we have not only modified some of the source code of phpBB, but we have also added plugins. For anyone who has any experience adding plugins into phpBB, you know that it’s akin to chewing glass (to say the least). Even though we version track in CVS, it would still be somewhat of a PITA to update to 2.0.22. The process would be something along the lines of:

Import the new version into CVS alongside the old version with our changes. See if it makes sense to resolve the conflicts. If so, resolve the conflicts and begin testing. If not, figure out how to duplicate the changes from the previous version (2.0.21) in the new version (2.0.22). Once that’s been done, add the plugins that were installed in the old version into the new version. Come up with a transition plan for the production server. Back up the data and do a few test runs of the transition on the development box. Then schedule the outage time and do the turnover to the new server. Then pray everything goes ok for the transition. Simple, no?

The point of going through that lengthy explanation was to demonstrate that the upgrade process may not be as simple (in a lot of cases) as:

apt-get update && apt-get upgrade

The exploit itself requires a user to create a shockwave flash file with certain parameters, then put it into a specific web page with certain parameters, and then it must be private messaged (emailed) to someone who is already signed into the board (has an active cookie).

Many security experts would tell you, “It’s a vulnerability, it needs to be patched immediately.” Well, let’s do that evaluation thing I was referring to earlier. How likely is it that someone is going to take the time to create that flash file? And even if someone does go to that trouble, what’s to say that if a user (or the admin) receives the message in an email, they are going to visit the site and watch the video?

My colleague was asserting that it’s out there on the internet and needs to be protected. And to that extent, I certainly agree. However, the amount of time that it would take to make all those changes, test them, and deploy the changes to the production server far outweighs the possibility of the application being exploited.

When I first started out in security, I took the approach, “It’s a vulnerability…Security at all costs.” Now I have learned that sometimes one needs to balance out time vs. need vs. priority. So I encourage System Administrators to think before jumping into situations like that. Think of how much more work could be accomplished in the time that would have been spent trying to patch something that probably wouldn’t have been exploited to begin with.

Configuring mod_security for EnGarde Secure Linux

Wednesday, January 24th, 2007

Introduction

This document is intended to guide a user through initially setting up and understanding a mod_security + Apache2 setup under EnGarde Secure Linux. Once you have completed reading this document, you should understand the basics of mod_security, what it is used for, and why it may apply to you and your environment.

Why mod_security

The need for mod_security may not be initially apparent since we are all perfect programmers and rarely make a mistake that could prove hazardous to security. It may not be for you, but it is for the users of your servers who may not be as adept in creating web applications.

mod_security is a web application intrusion detection and prevention engine. It operates by ‘hook’ing itself into apache and inspecting all requests against your specific ruleset. It can be used to monitor your server with logging or even protect it by ‘deny’ing attacks.

Skills Needed

You will need to have access to the WebTool and the GDSN Package Manager. You need to have shell access to the machine and the ability to use a text editor to make the necessary changes to the configuration files.

Installation

To install mod_security, go into the GDSN Manager in the Guardian Digital WebTool.

 System -> Guardian Digital Secure Network
 Module -> Package Management

Find the line that says libapache-mod_security and check the checkbox next to it. Click the Install Selected Packages button. Let the mod_security package install.

Configuration

Now it’s time to configure the mod_security package. The first thing that has to be done is to add the configuration file for mod_security (which we are going to create) to the apache2 configuration file. To accomplish this, ensure that the following line is somewhere in your /etc/httpd/conf/httpd.conf:

 Include conf/mod_security.conf

This ensures that when apache2 starts up, the configuration you specify in /etc/httpd/conf/mod_security.conf will be loaded.

Basic Configuration

Once you have installed mod_security, it’s time for some basic configuration. In order to keep consistency, the mod_security.conf configuration file should be created in the /etc/httpd/conf/ directory. For a basic configuration (which we will walk through step-by-step), your /etc/httpd/conf/mod_security.conf file should look as follows:

 LoadModule security_module /usr/libexec/apache/mod_security.so
 <IfModule mod_security.c>
   SecFilterEngine On
   SecFilterDefaultAction "log"
   SecFilterCheckURLEncoding On
   SecFilterForceByteRange 1 255

   SecServerSignature "Microsoft-IIS/5.0"

   SecAuditEngine RelevantOnly
   SecAuditLog /etc/httpd/logs/modsec_audit_log
   SecFilterDebugLog /etc/httpd/logs/modsec_debug_log
   SecFilterDebugLevel 0
 </IfModule>
SecFilterEngine

This directive turns on mod_security.

SecFilterDefaultAction

This directive decides what happens to a request that is caught by the mod_security filtering engine. In our case, we are going to log the request. By reading the documentation, you will find that there are many other options
available. By changing this line slightly (once you have logged and found out when and how the mod_security engine catches requests), you can deny requests and produce errors:

 SecFilterDefaultAction "deny,log,status:404"

This line denies the request, logs it to your log files, and sends the requester back an HTTP status code 404 (also known as Page Not Found).

SecFilterCheckURLEncoding

This directive checks the URL to ensure that all characters in the URL are properly encoded.

SecFilterForceByteRange

This directive asserts which bytes are allowed in requests. The 1…255 specified in the example allows almost all characters. To bring this down to just the minimal ASCII character set, replace the above line with:

 SecFilterForceByteRange 32 126
SecServerSignature

This directive can be used to attempt to mask the identity of the apache server. Although this method works well, it is not 100% effective as there are other methods that can be used to determine the server type and version. It should be noted that for this to work, the Apache2 configuration variable ServerTokens should be changed from Prod (default) to Full so the line reads as follows:

 ServerTokens Full
SecAuditEngine

This directive allows more information about the methods of an attacker to be logged to the specified logfile. To turn this on to log every request object use the syntax:

 SecAuditEngine On

This is not very desirable as this produces a LOT of output. The more desirable version is the one used above:

 SecAuditEngine RelevantOnly

This logs only the interesting stuff that may be useful in back tracing the methods of an attacker.

SecAuditLog

This is the location of the audit log file. It is generally preferred to use absolute paths to files to ensure the correct path is being used.

SecFilterDebugLevel

This directive refers to the debug level logged to the specified logfile. The current value of 0 should be used on production systems. While the environment is in testing, a level of 1 to 4 should be used, with verbosity increasing from 1 up to 4.

SecFilterDebugLog

This is the location of the debug log file. It is generally preferred to use absolute paths to files to ensure the correct path is being used.

We will add some lines to do some Selective filtering. Selective filters are used to handle some specific situations that cannot be targeted with site-wide policy. However you need to be careful of what you make site-wide policy since some of these security measures can break your current setup.

There are even more in depth uses where you can number rules and apply them to certain sets of directives and not to others. mod_security allows for very granular control. The in depth discussions on using these is beyond the scope of this document.

General

Since mod_security is a keyword driven engine, it will take the specified action on simple keyword matches. This is to say that anything that follows the directive SecFilter will engage the appropriate action. For example:

 SecFilter "<applet"

If the <applet> tag appears anywhere in the request, then the log action specified above is taken.

XSS Attacks

To try to prevent some types of cross site scripting attacks, you can add the following lines to your configuration file:

 SecFilter "<script"
 SecFilter "<.+>"

This tries to prevent Javascript injections or HTML injections.

Directory Traversal

Rarely will it be necessary for a user to traverse directories using the “../” construct. In order to prevent that, we can add the line:

 SecFilter "\.\./"
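
To check that a filter actually fires, make a request containing a traversal sequence and watch the audit log; if you switched the default action to deny with status:404 as shown earlier, you should get a Page Not Found back. A hedged sketch:

 $ curl -i 'http://localhost/index.html?file=../../etc/passwd'
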
GET/HEAD Requests

With the use of these lines, we will not accept GET or HEAD requests that have bodies:

 SecFilterSelective REQUEST_METHOD "^(GET|HEAD)$" chain
 SecFilterSelective HTTP_Content-Length "!^$"
Unknown Requests

There are occasionally requests that come across (usually malicious) that we don’t know how to handle. At that point we let mod_security handle the request by adding the following lines:

 SecFilterSelective HTTP_Transfer-Encoding "!^$"

Conclusion

At this point, you should be capable of setting up a basic installation of mod_security. There are many more combinations of both simple and advanced techniques and directives that can be used to protect your server. By reading the documentation, you can gain very granular control over your web server’s attack detection and prevention.


Mail::IMAPClient

Tuesday, January 23rd, 2007

Description: Recently, I have had the pleasure of getting knee-deep into various aspects of email. One of the things that I consistently found myself wanting to do was to parse through it. The best way to do this is to connect to the IMAP server and download the messages, and the best way I have come across to accomplish this task is using Mail::IMAPClient.

CPAN: Mail::IMAPClient

Example 1:
Creating a connection to an IMAP server isn’t complicated, and once you are connected, there are many things you can do to manipulate messages. In this first example, I am merely going to show you how to connect to the mail server and download ALL the messages in specific folders:

# Always be safe
use strict;
use warnings;

# Use the module
use Mail::IMAPClient;

 my $imap = Mail::IMAPClient->new( Server   => 'mail.server.com:143',
                                   User     => 'me',
                                   Password => 'mypass')
        # module uses eval, so we use $@ instead of $!
        or die "IMAP Failure: $@";

 foreach my $box ( qw( HAM SPAM ) ) {
   # Which file are the messages going into
   my $file = "mail/$box";

   # Select the mailbox to get messages from
   $imap->select($box)
        or die "IMAP Select Error: $!";

   # Store each message as an array element
   my @msgs = $imap->search('ALL')
        or die "Couldn't get all messages\n";

   # Loop over the messages and store in file
   foreach my $msg (@msgs) {
     # Pipe msgs through 'formail' so they are stored properly
     open my $pipe, "| formail >> $file"
       or die("Formail Open Pipe Error: $!");

     # Send msg through file pipe
     $imap->message_to_file($pipe, $msg);

     # Close the message pipe
     close $pipe
       or die("Formail Close Pipe Error: $!");
   }

   # Close the folder
   $imap->close($box);
 }

 # We're all done with IMAP here
 $imap->logout();

Example 2:
In this next example, we are going to take advantage of some other methods provided by this useful little module. We will begin with the same base as in Example 1, but add some nuances in the middle for functionality.

Note: Messages don’t get immediately deleted with IMAP; they are only marked for deletion. They aren’t actually deleted until the mailbox is expunged. In this case, that happens after the loop over each mailbox is complete, which means that if the program gets interrupted in the middle, the messages won’t actually be deleted, since the expunge command is only issued at the end.

# Always be safe
use strict;
use warnings;

# Use the module
use Mail::IMAPClient;

 my $imap = Mail::IMAPClient->new( Server   => 'mail.server.com:143',
                                   User     => 'me',
                                   Password => 'mypass')
        # module uses eval, so we use $@ instead of $!
        or die "IMAP Failure: $@";

 foreach my $box ( qw( HAM SPAM ) ) {
   # How many msgs are we going to process
   print "There are ". $imap->message_count($box).
          " messages in the $box folder.\n";

   # Which file are the messages going into
   my $file = "mail/$box";

   # Select the mailbox to get messages from
   $imap->select($box)
        or die "IMAP Select Error: $!";

   # Store each message as an array element
   my @msgs = $imap->search('ALL')
        or die "Couldn't get all messages\n";

   # Loop over the messages and store in file
   foreach my $msg (@msgs) {
     # Skip (and delete) the msg if it's over 100k,
     #  before bothering to open the pipe
     if ($imap->size($msg) > 100000) {
       $imap->delete_message($msg);
       next;
     }

     # Pipe msgs through 'formail' so they are stored properly
     open my $pipe, "| formail >> $file"
       or die("Formail Open Pipe Error: $!");

     # Send msg through file pipe
     $imap->message_to_file($pipe, $msg);

     # Close the message pipe
     close $pipe
       or die("Formail Close Pipe Error: $!");

     # Delete each message after downloading
     $imap->delete_message($msg);

     # If we are just testing and want to leave
     #  the messages untouched, then we can use
     #  the following line instead of the delete:
     # $imap->deny_seeing($msg);
   }

   # Expunge and close the folder
   $imap->expunge($box);
   $imap->close($box);
 }

 # We're all done with IMAP here
 $imap->logout();

Sys::Hostname

Friday, January 19th, 2007

Description: Sys::Hostname is a relatively small, but very useful module. Just as the module name describes, it gets your system’s hostname. To paraphrase the module’s POD documentation, it will try every conceivable way to get the hostname of the current machine.

CPAN: Sys::Hostname

Example:
The one and only use for this module.

# Always be safe
use strict;
use warnings;

# Use the module
use Sys::Hostname;

# Get the hostname
my $host = Sys::Hostname::hostname();

# The above line can also be written as
#my $host = hostname;

print "You are on: $host\\n";

HTML::Entities

Thursday, January 18th, 2007

Description: When taking user input through any number of forms, there could be characters that you aren’t expecting. This is exactly the situation HTML::Entities was designed to handle. It converts user input into a form that can help mitigate certain types of web-based scripting attacks.

CPAN: HTML::Entities

Example 1:
The general example of using encode_entities() is probably also the most common. It basically says to encode everything in the string that it is possible to encode.

# Always be safe and smart
use strict;
use warnings;

# Use the module
use HTML::Entities;

 my $html = "bad stuff here&#$%";
 $html = encode_entities($html);
 print "HTML: $html\\n";

__OUTPUT__
HTML: bad stuff here&#0

Example 2:
This is a slightly more specific example, as it passes an explicit set of characters to treat as “unsafe”. Since nothing in the sample string falls within the \x80-\xff range, the string comes back unchanged.

# Always be safe and smart
use strict;
use warnings;

# Use the module
use HTML::Entities;

 my $html = "bad stuff here&#$%";
 $html = encode_entities($html, "\\x80-\\xff");
 print "HTML: $html\\n";

__OUTPUT__
HTML: bad stuff here&amp;amp;#0

Example 3:
This is an example of decode_entities(), which does the reverse: it checks the string for any HTML-encoded characters and decodes them into their Unicode equivalents. This general form of decode_entities() is similar to the form of encode_entities() demonstrated in Example 1.

# Always be safe and smart
use strict;
use warnings;

# Use the module
use HTML::Entities;

 my $html = "encoded: bad stuff here&amp;#0";
 $html = decode_entities($html);
 print "Unicode: $html\\n";

__OUTPUT__
Unicode: encoded: bad stuff here&#0

Mail::Sender

Wednesday, January 17th, 2007

Description: This is probably one of the modules that I use most frequently. I commonly write reporting and statistic generating scripts. When the data is finished being crunched, I then dump it into a scalar and send it off in an email. This is the module that does my dirty work for me.

CPAN: Mail::Sender

Example 1:
This is probably the most common implementation of Mail::Sender that I use. I generally build up what is going to go into the $EMAIL variable throughout the script and then send it inside an eval block so that any failure can be caught.

# Always be safe
use strict;
use warnings;

# Use the module
use Mail::Sender;

# Remove Mail::Sender logo feces from mail header
$Mail::Sender::NO_X_MAILER = 1;

my $EMAIL = "This is the body of the email here.\\n";

  eval {
   # Create the anonymous Mail::Sender object
   (new Mail::Sender)->MailMsg(
   {
     smtp => "mail.mydomain.com",
     from => "\\"Postmaster\\" <postmaster\\@mydomain.com>",
     to   => "\\"Eric Lubow\\" <me\\@mydomain.com.com>",
     cc   => "\\"Tech Support\\" <tech\\@mydomain.com>",
     subject => "Test Email",
     msg  => "$EMAIL"
   })
     # Show the Mail::Sender error since $! isn't set here
     or die "Error Sending Mail: $Mail::Sender::Error";
  };

Example 2:
This example sends a mixed multipart message, with text in HTML and alternative text in plaintext. If the receiving user’s email client can read HTML messages, then the HTML portion of the message will be shown. Otherwise, the plaintext (alternative) will show. The iterative loop sends the email message to as many users as are in the list; the script below just provides a qw (quote words) list to be used in the loop. Your best bet would be to create an array of email addresses and iterate over it:

 # Parentheses (not square brackets) build a list;
 #  single quotes keep the @ from interpolating
 my @emails = ('eric@mydomain.com', 'techs@mydomain.com');
 for my $rcpt (@emails) {
   ...
 }

Note: In this example, I have also included a subroutine to create a message boundary. I don’t have anything major against the ones that are generated by default; I just think they give away a little too much information for my liking. Mine is slightly more random.

# Always be safe
use strict;
use warnings;

# Use the module
use Mail::Sender;

# Remove Mail::Sender logo feces from mail header
$Mail::Sender::NO_X_MAILER = 1;

 # Display name used in the From: header
 my $from_name = 'Postmaster';

 for my $rcpt ( qw( me@mydomain.com tech@mydomain.com ) ) {
   my $sender = new Mail::Sender
    { smtp     => '127.0.0.1',
      from     => "$from_name <postmaster\@mydomain.com>",
      boundary => make_boundary()
    }
    or die "Error in mailing: $Mail::Sender::Error\n";

   # Begin the message, tells the header what message type
   $sender->OpenMultipart(
     { to        => "eric\\@mydomain.com",
        subject   => "Test Email",
        headers   => "Return-Path: postmaster\\@mydomain.com\\r\\n",
        multipart => "mixed",
     })
     or die "Can't Open message: $sender->{'error_msg'}\n";

     # Change the content-type
     $sender->Part( { ctype => 'multipart/alternative' });

     # Put the plain text into the email (aka the alternative text)
     $sender->Part(
       { ctype       => 'text/plain',
         disposition => 'NONE',
         msg          => "This is a test message.\n\nHere is the plaintext.\\n"
       });

     # Put the HTML text into the HTML (what SHOULD be displayed)
     $sender->Part(
       { ctype       => 'text/html',
         disposition => 'NONE',
         msg         => "This is a <B>TEST</B> message.\\n<br>\\n"
       });

     # The message isn't sent until it's Close()'d
     $sender->Close()
       or die "Failed to send message: $sender->{'error_msg'}\\n";
 }

#######################################################
# Function: Creates a random message boundary
# Parameters: None
# Return: Boundary
# Notes: Boundaries are 43 characters in length
#######################################################
sub make_boundary {
  my ($boundary, $char, $wanted) = ('','',43);

  open R, '<', "/dev/urandom";
  while ($wanted) {
    read (R, $char, 1)
      or die "Couldn't read from /dev/urandom: $!\n";
    if ($char =~ /[A-Za-z0-9]/) {
      $boundary .= $char;
      --$wanted;
    }
  }
  close(R);

  return $boundary;
}

IO::Socket::INET

Tuesday, January 16th, 2007

Description: When I need to interact with the raw IO of telnetting to a port or creating a hand-rolled implementation of my own service, I use IO::Socket. My most recent endeavor came from a need to check whether or not services were running on a regular basis. I will show some excerpts from that code later on in this post.

Although IO::Socket::INET is just a frontend for a good number of Perl’s builtin functions, I find it handy as it is generally a little more sane.

CPAN: IO::Socket::INET

Note: IO::Socket::INET is a subclass of IO::Socket.
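
As the description above says, the module is mostly a friendlier face over Perl’s builtins. For contrast, here is a minimal sketch of roughly the same TCP connection made with the raw builtins via the core Socket module (the host and port are just examples):

# Always be safe and smart
use strict;
use warnings;

# The core module that IO::Socket wraps
use Socket;

 # Create the socket by hand
 socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
   or die "Socket creation error: $!";

 # Pack the port and IP into an address structure
 my $addr = sockaddr_in(25, inet_aton("localhost"));

 # Connect to the remote service
 connect($sock, $addr)
   or die "Socket connect error: $!";

 # Close the socket
 close($sock)
   or die "Socket close error: $!";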

Example 1:
This example is the socket portion of a script that checks to see if certain daemons are alive. It does this by connecting to the port that they should be listening on. If it receives a response, then the daemon is alive; if not, the script restarts the daemon and tries contacting it again. As you can see below, the response itself doesn’t matter, just that there is a response being received. (A sketch of the restart side follows the example.)

# Always be safe and smart
use strict;
use warnings;

# Use the module
use IO::Socket;

 # Create the socket; the constructor returns undef on failure
 my $socket = new IO::Socket::INET(
                   PeerAddr => "localhost", # Hostname/IP
                   PeerPort => "smtp(25)",  # Service (Port)
                   Proto    => "tcp",       # Protocol
                   )
   or die "Socket connect error: $!";

 # If we got this far, the daemon answered; we don't
 #  need the response itself, so just close up

 # Close the socket
 close $socket
   or die "Socket close error: $!";

Example 2:
On the other side of the fence, we can create a listening socket that acts as a server. If you run this little script and then connect to the specified port, it will print something like the following:

Welcome 127.0.0.1 to a simple socket server.

The server script will listen indefinitely (or at least until you press CTRL-C to make it exit). To test the script and ensure you receive something similar to the above response (with your IP instead of 127.0.0.1), type the following at the Linux command prompt (replacing the host and port with your own):

$ telnet localhost 12345

# Always be safe and smart
use strict;
use warnings;

# Use the module
use IO::Socket;

 # Listen port
 my $port = 12345;

 # Prototype the socket
 my $server = new IO::Socket::INET(
   Listen    => SOMAXCONN,   # Max connections allowed by system
   Reuse     => 1,           # Reuse the listening address
   LocalAddr => "localhost", # Listen hostname
   LocalPort => $port,       # Listen port
   Proto     => "tcp",       # Listen protocol
 ) or die "Socket bind error: $!";

 # Listen for the client until we get a CTRL-C
 while (my $client = $server->accept()) {
   # Force the data out of the stack
   $client->autoflush(1);

   # Say hello to my little client
   print $client "Welcome ". $client->peerhost ." ".
                 "to a simple socket server.\n";
 }

 # Close the server when we are done
 close $server
   or die "Socket close error: $!";