Creating a Process Table hash in Perl

I came across a situation where I needed to access the process table in Perl. The problem I found was that the best accessor, Proc::ProcessTable, only returns an array of process objects. Since it is fairly senseless to keep looping over an array to find the exact process id you want, it makes sense to turn it into a hash keyed by pid.

use strict;
use warnings;
use Proc::ProcessTable;

# Create a new process table object
my $pt = Proc::ProcessTable->new;

# Initialize your process table hash
my %pt_hash;

# Get the fields that your architecture supports
my @fields = $pt->fields;

# Outer loop for each process in the table
foreach my $proc ( @{ $pt->table } ) {
   # Inner loop for each field of the process
   for my $field (@fields) {
      # Add the field to the hash, keyed by pid
      $pt_hash{ $proc->pid }{$field} = $proc->$field();
   }
}

It's just as simple as that. If you want to be sure the data is actually in there, add these two lines at the end of the file for proof:

use Data::Dumper;
print Dumper \%pt_hash;

The hash is keyed by process id, and each value is another hash with the supported fields as its keys.
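For example, you can now index straight into the hash for a given pid instead of walking the array again. A minimal sketch; the cmndline field is an assumption here, since the available fields vary by platform (use whatever $pt->fields reported on yours):

# Look up one process directly; $$ is this script's own pid
my $pid = $$;
if ( exists $pt_hash{$pid} ) {
   # 'cmndline' is platform-dependent; substitute any field from @fields
   print "$pid => $pt_hash{$pid}{cmndline}\n";
}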


Deleting Lots Of Files (SysAdmin problem solving exercise)

Since I know I am not the first (or the last) to make a typo in logrotate and not catch it for a while, someone else must have been in the position of having to delete a lot of files in the same manner. I recently learned that, as usual, there is more than one way to handle it.

To put the situation in context, I basically allowed thousands of mail.* files to be created. These files littered the /var/log directory and slowed down access to the entire file system. I figured this out in a number of ways.

The first clue was that an ls anywhere would just hang. My first reaction was to check what was eating up the CPU, so I ran top, and noticed that logrotate was hogging all the CPU cycles. Since logrotate (by default) operates on one parent directory, /var/log, I headed over there and did an ls. Once again, it just hung. Then I figured the file system was slow and decided to check out some file system information. The next two commands I ran were df -h and df -i: df -h to see if we were out of disk space (and yes, I lazily use human-readable format), and df -i to check how many inodes were in use. (For more information on inodes, check out the Wikipedia entry.)

Now that I knew the system was short on inodes, I checked the output of lsof, and it was clear we had some serious problems in the /var/log directory. After some quick investigation, I realized that there were far too many mail.* files. How do I get rid of them? Glad you asked… Let's assume that we want to delete ALL the mail.* files in the /var/log directory.
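Before deleting anything, it helps to confirm the count, and since ls itself hangs, a Perl one-liner that reads the directory entries directly avoids both the shell glob and ls's sorting. This is just a sketch using this example's path and pattern:

[root@eric] /var/log # perl -e 'opendir my $dh, "/var/log" or die $!; print scalar(grep { /^mail\./ } readdir $dh), "\n"'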

1) The easiest way is to do it with find:
1a) Using find's -delete action:

[root@eric] /var/log # find ./ -type f -name "mail.*" -delete

or
1b) using find's -exec action with rm:

[root@eric] /var/log # find ./ -type f -name "mail.*" -exec rm -rf '{}' \;

These will work, but either will be slow since neither batches the removals; the -exec form in particular forks a separate rm process for every single file.

2) A slightly more preferred way is to use bash:

[root@eric] /var/log # for n in mail.*; do rm -v "$n"; done

This is a little faster, but will still be relatively slow since there is still no batch execution: the shell forks one rm per file. (Note: the -v flag makes rm report EVERY file it deletes, which produces quite a bit of output. Feel free to leave it out if you really screwed up.)

3) The actual preferred method is to use find:

[root@eric] /var/log # find ./ -type f -name "mail.*" | xargs rm -f

I believe this is the preferred method because xargs batches the file names, packing as many as will fit onto each rm command line; instead of one process per file, you get one process per several thousand files, which is far less work for the system.

There are certainly other ways to accomplish this task; it can always be done with a Perl one-liner, or with some Perl modules to save time. These are just a few ideas to point someone in the right direction.
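For example, here is one such one-liner, a hedged sketch of the same cleanup: it reads the directory itself and unlinks the matches, so the shell never has to expand thousands of names and no extra processes are spawned (the path and pattern are again this example's):

[root@eric] /var/log # perl -e 'chdir "/var/log" or die $!; opendir my $dh, "." or die $!; unlink grep { /^mail\./ && -f } readdir $dh'

Like find's -delete, this still makes one unlink call per file, but with no per-file process overhead.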

MySQL Proxy Query Profiling

Since I am now finally getting to play with MySQL Proxy, I am going to outline some recipes here that I have found, created, or modified, in the hope that they may be useful to someone other than me. This one is a recipe for profiling queries. It writes its information to the file named by the PROXY_LOG_FILE environment variable, currently mysql.log, which will be created in the directory that you run mysql-proxy from. The script is mostly commented and should therefore be pretty self-explanatory. It was adapted from the reference documentation listed at the bottom of this entry.

assert(proxy.PROXY_VERSION >= 0x00600,
  "You need to be running mysql-proxy 0.6.0 or greater")

 -- Set up the log file
 local log_file = os.getenv("PROXY_LOG_FILE")
 if (log_file == nil) then
    log_file = "mysql.log"
 end

 -- Open up our log file in append mode
 local fh = io.open(log_file, "a+")

 -- Set some variables
 local original_query = ""
 local executed_query = ""
 local replace = false
 local comment = ""

 -- For profiling
 local profile = 0

-- Push the query onto the Queue
function read_query( packet )
  if string.byte(packet) == proxy.COM_QUERY then
    local query = string.sub(packet, 2)

    -- Pull out the comment and store it
    original_query = string.gsub(query, "%s*%*%*%*%s(.+)%s%*%*%*%s*",'')
    comment = string.match(query, "%s*%*%*%*%s(.+)%s%*%*%*%s*")

    -- If we have a comment, check it for a PROFILE request
    if (comment) then
        if string.match(string.upper(comment), '%s*PROFILE') then
          -- Profile types:
          --   ALL, BLOCK IO, CONTEXT SWITCHES, IPC,
          --        MEMORY, PAGE FAULTS, SOURCE, SWAPS

          -- Set it up for profiling: prepend SET PROFILING=1
          --   so the server records a profile for this query
          original_query = "SET PROFILING=1; " .. original_query
      end -- End if (PROFILE)
    end -- End if (comment)

    executed_query = original_query
    -- Check for a 'SELECT' typo
    if string.match(string.upper(original_query), '^%s*SLECT') then
        executed_query = string.gsub(original_query,'^%s*%w+', 'SELECT')
        replace = true;
    -- matches "CD" as first word of the query
    elseif string.match(string.upper(original_query), '^%s*CD') then
        executed_query = string.gsub(original_query,'^%s*%w+', 'USE')
        replace = true
    end

    -- Append the matching SHOW PROFILE statement to the query
    if (comment and string.match(string.upper(comment), '%s*PROFILE')) then
        executed_query = executed_query .. "; SHOW " .. comment
    end

    -- Actually execute the query here
    proxy.queries:append(1, string.char(proxy.COM_QUERY) .. executed_query )
    return proxy.PROXY_SEND_QUERY
  else
    executed_query = ""
  end
end

function read_query_result (inj)
  local res = assert(inj.resultset)

  -- Get the rows returned/affected by the actual query
  -- (declared before the counting loop, which previously
  --  incremented an undefined global and then discarded it)
  local row_count = 0
  local num_cols = string.byte(res.raw, 1)
  if num_cols > 0 and num_cols < 255 then
     -- A result set came back: count its rows
     for row in inj.resultset.rows do
         row_count = row_count + 1
     end
  elseif res.affected_rows then
     -- No result set: fall back to the affected-row count
     row_count = res.affected_rows
  end

  -- Prepend the error tag in the log
  local error_status = ""
  if res.query_status and (res.query_status < 0) then
     error_status = "[ERROR]"
  end
  
  -- Prepend the comment line in the log
  if (comment) then
     fh:write( string.format("%s %6d -- [COMMENT] %s\n",
        os.date('%Y-%m-%d %H:%M:%S'), 
        proxy.connection.server["thread_id"],
        comment))
  end
  
  -- Prepend the typo in the log
  if (replace) then
     fh:write( string.format("%s %6d -- [REPLACEMENT] %s\n\t\t\t%s\n",
        os.date('%Y-%m-%d %H:%M:%S'), 
        proxy.connection.server["thread_id"],
        ("replaced " .. original_query),
        ("with " .. executed_query)))
  end
  
  -- Write the query adding the number of rows retrieved and query time
  fh:write( string.format("%s %6d -- %s %s {%d rows} {%d ms}\n",
     os.date('%Y-%m-%d %H:%M:%S'), 
     proxy.connection.server["thread_id"],
     error_status,
     executed_query,
     row_count,
     inj.query_time)
  )
  fh:flush()
end

To make this work, simply prefix your query with three asterisks, 'PROFILE <profile_type>', and then three more asterisks, and the profile information will be returned along with your query results:

*** PROFILE ALL *** SELECT * FROM foo_bar;

Two result sets will be returned: your results, and then the profile of your query.

Reference: Writing Lua Scripts for MySQL Proxy


Text Messages to Cell Phones via Email

I have been compiling a list of the domains one needs in order to send text messages to cell phones via email. I am a huge user of Nagios, and this is how I keep myself aware of status changes. Below I have listed the carriers that I use most frequently. If you have any others to add to make this list more complete, please leave a comment and I will add them.

I have seen other instances of this before, but some are outdated. These are the newest ones that I have come across.

The assumption here is that the telephone number of the person you are trying to text message is 2225551212. Just make sure that there is nothing in between the numbers (like a '.' or a '-'). Also make sure that you don't put a '1' before the phone number.

  • AT&T: 2225551212@txt.att.net
  • Verizon: 2225551212@vtext.com
  • T-Mobile: 2225551212@tmomail.net
  • Alltel: 2225551212@message.alltel.com
  • Virgin Mobile: 2225551212@vmobl.com
  • Sprint: 2225551212@messaging.sprintpcs.com
  • Nextel: 2225551212@messaging.nextel.com
  • All Other: 2225551212@teleflip.com

It should be noted that the last item (Teleflip) can be used either in place of any of these or as a fall-through; it seems to act as a universal text message gateway.
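Since I use this for Nagios alerts, here is a minimal Perl sketch of the idea: it hands a short message to a mail relay addressed to one of the gateways above. The relay host, sender address, and message text are all assumptions for illustration; Net::SMTP ships with Perl.

use strict;
use warnings;
use Net::SMTP;

# Assumed local mail relay; substitute your own SMTP host
my $smtp = Net::SMTP->new('localhost')
   or die "Cannot connect to SMTP server";

# Sender is illustrative; recipient is the Verizon gateway from the list
$smtp->mail('nagios@example.com');
$smtp->to('2225551212@vtext.com');

# Keep the body short -- it arrives as a text message
$smtp->data();
$smtp->datasend("To: 2225551212\@vtext.com\n");
$smtp->datasend("Subject: Nagios alert\n");
$smtp->datasend("\n");
$smtp->datasend("PROBLEM: web01 is DOWN\n");
$smtp->dataend();
$smtp->quit;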

UPDATES:
Here are the contributed addresses, with credit in parentheses following each one:

  • Claro (Brazil): 2225551212@clarotorpedo.com.br (Rodrigo)