Archive for the ‘ Musings ’ Category

Google Securing The Web One Discrete Monopolizing Push At A Time

Friday, November 4th, 2011

Contrary to speculation by some, Google’s decision to encrypt search data is motivated by the goal of making the web as a whole more secure; it’s not driven by economic interests. I think Google is silently forcing the internet to do what it should be doing on its own.

What Does Web 2.0 Mean To You?

Wednesday, December 23rd, 2009

I have been doing a lot of reading and a lot of thinking, trying to decide what exactly Web 2.0 means. What massive advancement in an emerging technology called the internet warrants an increment in the major version number?

Some people say it’s the looks: the new feel of the internet, with crazy CSS, rounded corners, and a lighter, more airy feeling. I don’t think that’s it.

Some people say that it’s the AJAX layer that has been added to the internet. This refers to the layer of interactivity a web page can give you. I don’t think it’s this either.

AT&T – Reactive vs. Proactive

Thursday, December 10th, 2009

As much as I hate to steal a title or a good joke, I want to title this post iPhone Outage? There’s An App For That. Why? Because it’s funny.

So why am I talking about reactive vs. proactive? In case you haven’t seen it yet, AT&T recently came out with an app called AT&T Mark The Spot. The idea behind the app is that if you have a dropped call or bad reception, you open the app, click your problem, and it marks the spot by sending the information to AT&T. I am still not entirely sure how this app works in an area where there is NO reception: how does it know where you are to tell AT&T?

Price of Commercials

Wednesday, October 28th, 2009

The price of commercials is especially high for engineers. And by commercials, I don’t mean an intermission between pieces of a sitcom or drama, I mean the brief 15 seconds of an interruption when someone asks an engineer in the zone a question that takes 3 seconds to answer. For the sake of argument, let’s say an engineer gets interrupted a mere 5 times per day including lunch and a daily meeting (let’s call it a scrum for fun).

If it takes that engineer, admin, developer, or whatever 10 minutes to get focused after each interruption (plus the initial ramp-up of getting into the office and into the swing of things), that means that out of an 8 hour day, 1 hour is wasted just refocusing. Refocusing just puts you back on the issue; it doesn’t put you back in the zone. Some engineers only get in the zone once per day. At that rate, you can massively waste someone’s productivity with a 10 second interruption.

What’s my point? Good question. That commercial/question/interruption that someone is pushing onto that engineer could be the straw that broke the camel’s back on a deadline. So be aware of the situation that your people are in, who is talking to them, who has access to them, and who takes advantage of that access. Those precious periods of concentration can afford you a huge win or bring about a big loss.

Causing More Problems Than You Solve

Wednesday, October 7th, 2009

To start off, if you know me personally, then you know I recently (July 30, 2009) broke my leg skydiving. If you’re interested, you can see the video on Youtube here. To make a long story short, I had surgery that night, they put a titanium rod in my thigh, and I have been on crutches since. I have only recently started learning to walk again (and I have not yet reached that point). This week my insurance decided that it was no longer necessary to send me to Physical Therapy (thanks Oxford).

Like any corporation, Oxford is in the business of making money, and in this case they are doing so by deciding not to pay for my PT. In the long run, the lack of rehabilitation will likely leave me in a weakened state and generally more prone to injury once I go back to my skydiving, motorcycle riding, MMA, and BASE jumping ways. If Oxford had said, “let’s make sure he can walk and then we’ll cut him off; at least he’ll have a foundation and be less prone to injury,” then they might be saving a bit of money on me in the long run.

So what does this sob story have to do with IT? A decision made now in order to save money can end up costing you more time and money in the long run. And since time is money, sometimes a little bit of planning can go a long way. Should you add the feature now because your biggest client wants it by Friday? Well, if you do that, then you might lose a few smaller clients along the way, and the word of mouth may be more damaging than temporarily upsetting that large client.

Perhaps you set up Nagios and immediately turned on alerting without learning the thresholds that your machines typically sit at. Then you get a whole slew of alerts and spend more time sorting out the real problems from the ones that just have a slightly abnormal operating level than you would have spent if you had looked at your machines’ baselines to begin with.
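For example, here is a minimal sketch of learning a baseline before alerting, assuming the stock nagios-plugins check_load plugin in its usual Debian location (the path and the threshold numbers are assumptions for illustration, not something from my setup):

# run the check by hand for a few days to see where the box normally sits
/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
# OK - load average: 0.42, 0.51, 0.48   (output approximate)

A box that idles around a load of 0.5 will never page you at 15/30, while a box that normally runs at a load of 12 needs much higher warning and critical values. Either way, the numbers should come from watching the machine, not from a default.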

There are a million examples that could be listed here. The point is, before jumping into a decision, try to make sure that you’re not going to be paying for it in the long run. A little planning can go a long way.

Busiest Person You Know

Friday, October 2nd, 2009

The old adage, “If you want something done, give it to the busiest person you know” is probably one of the truest messages you can pass to a technologist. The first thing I want to point out is there is a difference between busy and always doing something. Just because someone is doing something, doesn’t mean they are busy. If they are sleeping, they aren’t busy. But if you know someone who is constantly working on side projects (contributing to their own blog (more regularly than I do), building a web site, working on open source), or they have many hobbies, that is busy. If you ask them to do something, you can guarantee that they will find a way to get it done.

You’re probably wondering why I’m putting this in a blog where I primarily spend time writing about technology and the things I figure out therein. Well, it is generally applicable because I come up with the most time saving, interesting, and generally reusable solutions to an issue when I am busiest with other things and just trying to get it done.

Recently Joel Spolsky wrote about being a Duct Tape Programmer. And many of the solutions I am referring to here are duct tape style solutions (also known as the ones that stick). It’s usually the quick and dirty solutions that last the longest because they are the simplest and yet somehow most effective (and no, I am not only talking about programming). I’m talking about getting things done. So be the busiest person you know sometimes and just get it done. The solution will probably be better and more effective than you think while you’re doing it.

Bing! Hunch! Decision Engine!

Wednesday, June 3rd, 2009

I know, the title is an awful play on Batman from the 60s, but I thought it was funny, so tough. Anyway, Bing, as most of you know, is Microsoft‘s attempt to fix search (assuming you think search is actually broken, but that’s a whole other post).

Bing (which, for those of you who don’t know, stands for: Bing Is Not Google) is touting itself as a decision engine. If I understand correctly what a decision engine does, it helps you take a bunch of variables related to the outcome and, depending on your feelings about those variables, helps you get to the end state (a decision).

(overly simplified) example: Should I live in New York City?
Variables: Noise, Transportation, Money
Q 1: Do you mind a lot of noise at night?
A 1: Yes, New York City is fine.
Q 2: Do you like driving everywhere?
A 2: Yes I like driving. New York City is better for people who like mass transportation. Parking and timeliness of movement can be a problem.
Q 3: Do you have the money to live in New York City?
A 3: New York City is one of the most expensive cities in the world to live in. No, I don’t make enough to live in New York City.
Outcome: 2 of 3 answers are contrary to living in New York City. Therefore you should probably not live in New York City.

I don’t see how Bing does this for you.

Enter Hunch. Hunch is an actual decision engine (or at least a much closer version than Bing). Just to give you an idea of how Hunch works, I decided to ask it whether I should get a netbook or a laptop (even though I know full well I need a laptop and I love my MacBook Pro). This is what Hunch did for me.

Using multiple choice questions for everything, Hunch asked me about my usage plans. I told Hunch that I need it for photos, videos, music, etc. Then Hunch asked me how much I would be willing to pay. I need power and I know that comes at a cost, so I told Hunch greater than $1200. Hunch asked me about my travel habits, and I said I travel a lot but still need power. It asked me about an OS (to which I of course said Mac). It asked me about my keyboard size preference (I prefer a larger keyboard). Finally, it asked me whether this would be my primary computer. I said yes. It came up with the suggestion that there is an 85% chance that I should get a MacBook Pro based on my needs. Sounds good to me :). Hunch will even tell you why it came to that conclusion (based on your answers).

As an aside (since I am an email administrator by day), I found it interesting that Hunch, when sending their welcome email, sends a vcard to ensure that their email address is properly added to your contacts. And it is located just a few blocks from my office. Small world.

I am really excited for this product to go fully live. I think it is an absolutely outstanding engine, and once live it will be a great asset to the web (no, I am not being paid to say that). I just think that it’s about time stuff like this happened. Now if they can just get the experts involved for people who want more advanced information…

Why Idea People Should Twitter

Tuesday, May 26th, 2009

Let me start off by explaining what I mean by an “Idea Person.” To me, an idea person is someone who just has a knack for thinking of things that would make the world a better place (or at least make things easier for some people). They don’t necessarily need to be a scientist on the order of Albert Einstein, but they should be people who are constantly thinking. Something like, “Wouldn’t it be great if, in men’s rooms in bars, there was padding above the urinals so men wouldn’t hit their heads while relieving themselves?” It’s just an idea.

Quite often, people don’t get those ideas out fast enough and they lose them. It could even be because their minds move so quickly that they forget to jot them down. Enter the age of instant gratification. If you have an idea, Tweet it. Of course you could blog about it, but then people may only get it when they read your feed or whenever they get around to checking your blog. With Twitter, it’s an almost instantaneous media connection. All it takes is one person who is highly followed on Twitter to retweet your idea(s) and you instantly have high visibility.

Why does this matter? Well, I’m glad you asked. Because an idea person may not always have the desire or even the means to implement the ideas, but with the connections and viral dispersion of information that Twitter provides, someone somewhere will have the means and may share your desire. Someone may even be able to point out that the project or idea already exists (or is in production). Who knows, you might just end up finding a new business partner on Twitter if you follow the right people and the right people follow you.

Social Media Information Propagation

Tuesday, May 12th, 2009

This morning I read the news story Irish Student Hoaxes World Media With Fake Quote. To summarize the article, an Irish student put a few quotes on Wikipedia on the page of a composer who had recently passed away to see how quickly people would use them. He made up the quotes and they were quickly on the editorial sheets.

The point is that we are all too quickly grabbing information without verifying it. Although Wikipedia provides an invaluable service to the online community, it is all too easy to abuse. It seems as though writers have forgotten the scientific part of their career: fact checking. Although I am not a journalist, nor will I ever be, I think that sacrificing fact checking in order to make a deadline may be the wrong approach.

This is just my point from the perspective of Wikipedia. Let’s take this from another social media perspective like Twitter. For example, let’s say that someone wrote on Twitter:

RT @mattcutts Google will no longer honor the rel=”nofollow” aspect of linking

This could cause a pretty big uproar. There would be a massive amount of Tweeting, both letting people know that Matt did not say this and people blindly retweeting it. Blog entries would show up saying why Google shouldn’t do that. Matt Cutts would likely have to write a blog entry saying he said no such thing. And I am sure all sorts of other hilarity would ensue. The speed of information in this day and age is so fast that misinformation can quickly wreak havoc. This is also a testament to the fact that people are generally more likely to spread negative information than positive information.

And to think all of this could have been avoided by a simple fact check by the first person who did an RT (after the person who made up the quote). And although it would be an interesting social experiment to test such a fact (as above), I think I’ll pass. Just keep in mind, fact checking is not something that should be left by the wayside.

Trying Out Twitter

Friday, March 27th, 2009

So I have finally decided to stop being a luddite about Twitter and give it a shot.

I went to a Limenal Group event held by my friend Scudder Fowler. He had 3 speakers all talking about CRMs (Customer Relationship Management) and how they relate to social media. The event was great. The speakers were Penelope Trunk, Paul Greenberg, and David Van Toor.

I have always been pretty apprehensive (read: against) with regard to Twitter. But with how prominent and viral it is, I decided to give it a shot. Point is, if you’re interested in following me, check me out: http://twitter.com/elubow. Feel free to hit me up on Twitter and try to make me a permanent convert.

The Next Step In Browser Evolution

Tuesday, September 2nd, 2008

I was having a chat with my two friends from Redub Consulting about the new Google Chrome browser. At a cursory exploration, we found that (as promised) the Javascript engine is incredibly fast. But I don’t want to dwell on that since Google already told us as much in their Chrome Comic. I want to talk about where this could be leading.

As some of you know, Adobe Air is a desktop application platform that can interact with internet applications. The catch here is that since it’s a desktop application, it has access to the same elements of the physical machine as any other desktop application (USB ports, printers, sound/video out ports, etc). Browsers don’t yet have that kind of access to a computer. They are limited to the user space in which they run. All the sound and video you hear and see is sent through 3rd party applications within the browser. What if the browser could control those elements of your machine? What if your entire computer experience was internet based? Google is already trying to push this with software as a service (GoogleDocs), but keep extending this idea. What if your media center could be controlled via an internet application?

The Eclipse IDE is now at a point at which you can edit your code as it’s running and change function calls at the opcode level to avoid recompiling your program over and over. Eclipse has grown to the point where it’s almost like an OS in its capabilities. In that same vein, Google’s new browser now controls its individual tabs and sandboxes each tab in order to have task level control over potentially runaway web applications.

So what am I trying to say here? I’m glad you asked. I believe this browser is the next step towards ubiquitous computing, in the sense that one application controls your internet (or whole user) experience. AppleTV, for instance, is a set of specially designed hardware that can be interacted with over the internet. By allowing applications such as Air (and potentially soon Chrome) to interact directly with the hardware attached to the computer, you are negating the need for that specially designed hardware. One piece of hardware can be designed to do it all in terms of the interactive experience. Google is stepping to the plate and pushing forward for just this type of innovation. Keep an eye on the features of Google Chrome to come. If it becomes integrated any deeper into the desktop, it will open up a new age of ubiquitous computing.

Underused Tools

Monday, March 5th, 2007

There are a lot of tools for administration and networking that generally go unused. They are very helpful in both diagnostics and general administration. There are even some tools that come installed with linux and go unused and unheard of. Here I am going to cover a mere few of my favorites and hope that they work for you as well.

  1. traceproto
    The first tool I want to cover is one of my favorite tools when writing firewall scripts and is a close relative of traceroute; it’s called traceproto. traceproto doesn’t come installed by default on most linux systems. It is a replacement for (or even just a complement to) traceroute that goes the extra mile. Like traceroute, you can change ports and ttl (time to live) on your queries. But the extra mile appears where you can specify whether to use tcp, udp, or icmp when you specify the ports. You can also specify the source port of the network traffic.
    The way that I make best use of this tool is when I am writing firewall scripts. For instance, when I allow a UDP service like DNS through on a firewall, it can sometimes be difficult to test whether my firewall rules are letting the packets through (since I have multiple levels of firewalls). Therefore, I use traceproto as follows (DNS is on udp port 53):

    root@tivo:~# traceproto -d 53 -p udp ns1.myserver.com
    traceproto: trace to ns1.myserver.com (1.2.3.4), port 53
    ttl  1:  ICMP Time Exceeded from 192.168.1.1 (192.168.1.1)
            0.83300 ms      0.67900 ms      0.71300 ms
    ttl  2:  ICMP Time Exceeded from 10.75.128.1 (10.75.128.1)
            11.577 ms       6.1550 ms       6.4960 ms
    ... Removed for brevity ...
    ttl  11:no response     no response     no response
    ttl  12:  UDP from myserver.com (1.2.3.4)
            132.07 ms       126.28 ms       125.88 ms

    hop :  min   /  ave   /  max   :  # packets  :  # lost
    -------------------------------------------------------
      1 : 0.67900 / 0.74167 / 0.83300 :   3 packets :   0 lost
      2 : 6.1550 / 8.0760 / 11.577 :   3 packets :   0 lost
      3 : 5.9680 / 7.0697 / 7.6650 :   3 packets :   0 lost
      4 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      5 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      6 : 8.8930 / 12.198 / 15.810 :   3 packets :   0 lost
      7 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
      8 : 9.2340 / 24.556 / 32.438 :   3 packets :   0 lost
      9 : 9.8230 / 13.669 / 18.890 :   3 packets :   0 lost
     10 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
     11 : 0.0000 / 0.0000 / 0.0000 :   0 packets :   3 lost
     12 : 125.88 / 128.08 / 132.07 :   3 packets :   0 lost
    ------------------------Total--------------------------
    total 125.88 / 22.834 / 132.07 :  21 packets :  15 lost
  2. pstree, pgrep, pidof
    Although these are 3 separate tools, they are all very handy for process discovery in their own right.

    To take advantage of the pidof command, you just need to figure out which program you want to know about (and its family of parent and children processes). Two ways to demonstrate this would be to use either kthread or apache2 as follows:

    # pidof apache2
    29297 29291 29290 29289 29245 29223 29222 29221 20441
    # pidof kthread
    6

    By typing pstree, you will see exactly what it is capable of. pstree outputs an ASCII graphic of the process list, separating it into parents and children. By adding the -u option to pstree, you can see if your daemons made their uid transitions. pstree can also display the SELinux context of each process (using the -Z option, if pstree was built with it). To see the children of kthread, which we found above was pid 6, we can use these commands in conjunction:

    # pstree `pidof kthread`
    kthread-+-aio/0
            |-kacpid
            |-kblockd/0
            |-kgameportd
            |-khubd
            |-kmirrord
            |-kpsmoused
            |-kseriod
            `-2*[pdflush]

    And finally pgrep. There are many ways to make use of pgrep. It can be used like pidof:

    # pgrep -l named
    18935 named

    We can also list all of the running processes whose controlling terminal is not pts/1:

    # pgrep -l -t pts/1 -v
    1 init
    2 ksoftirqd/0
    3 watchdog/0
    4 events/0
    ... Removed for brevity ...
    10665 getty
    18975 named
    19009 qmgr
    25447 sshd
    25448 bash
    29221 apache2
  3. tee
    There are sometimes commands that take a long time to run. You want to see the output, but you also want to save it for later. How can we do that? We can use the tee command. tee sends the output to STDOUT and also writes (or appends) it to a file. For simplicity, I will show you an example of tee using df.

    df -h | tee -a snap_shot
  4. tac
    Everyone knows about cat; it’s what we use to list the entire contents of a file. cat has a little known cousin, usually installed by default on a system, called tac. It prints the entire contents of a file in reverse (last line first).
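    A quick sketch of where that comes in handy (the log file name is just an example):

    # show the newest entries of a log first
    tac /var/log/messages | head -5
    # find the most recent line matching a pattern
    tac /var/log/messages | grep -m 1 'error'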
  5. fuser
    fuser displays the process ids of all processes using the specified file or file system. This has many handy uses. If you are trying to unmount a partition and want to know why it’s still busy, then run fuser on the filesystem and find out which processes are still using the device. fuser is even nice enough to tell you how each process is using the file or file system. For example, I want to umount /root/, but I can’t and I don’t know why:

    # fuser /root/
    /root:          29475c 29483c

    Hmm, the c means those processes have their current working directory there (in this case, my own shell). Maybe I need to watch what I’m doing.
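    If the bare pid list isn’t enough to tell what those processes are, fuser’s verbose mode (-v) adds the owning user and command name to each line (output trimmed and approximate):

    # fuser -v /root/
                         USER        PID ACCESS COMMAND
    /root:               root      29475 ..c..  bash
                         root      29483 ..c..  bash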

Most of these tools don’t fall into the same category, but they are all useful in their own right. I hope you can make as good use of them as I do. There are many more little known tools that come with most linux installs by default; these are just a few of the common ones that I take advantage of on a regular basis.

Super Security vs. Ease of Use

Monday, February 5th, 2007

I think I am going to get back onto my soapbox about being extraordinarily secure, only this time I am going to compare it to ease of use. I would once again like to reiterate that I am strongly for security in all its aspects. However, some people get into the presence of a security individual and freak out. They start saying that they know which things are secure and which are insecure, and then do the insecure things anyway.

A great example of this is SSH access to servers. Consider the following physical network layout.

                      ,---------- Server I
Inet---------|F/W|---+----------- Server II
                      `----------- Server III

If you want to leave certain nice-to-haves or ease of use functionality available to yourself, such as leaving SSH open to root logins or having a machine with anonymous FTP access available, then take a slightly different approach to securing your environment (or those particular machines): layered security. Without changing the physical layout of your network, change the logical layout using iptables and/or tcp wrappers. Make the network look more like this:

                             ,------Server II
Inet-----|F/W|----Server I--<
                             `------Server III

This essentially says that all traffic you want to funnel to Server II or Server III will now go through Server I. This can be used in a variety of ways. Let's say that all 3 servers are in the DMZ (De-Militarized Zone) and that Servers II and III are hosting the services available to the outside. Allowing direct access to them via SSH or FTP probably isn't the best idea because these are services that allow files to be changed. So what can we do to change this?

First let's configure TCP wrappers. Starting out with the assumption that you are being paranoid where necessary, let's set SSH up to only allow incoming connections from Server I and deny from everywhere else. In the /etc/hosts.deny file on Server II, add the following lines (everything that goes for Server II goes for Server III to mimic the setup):

sshd1: ALL
sshd2: ALL

Now in the /etc/hosts.allow file on Server II, add the IP address of Server I (IPs are all made up):

sshd1: 167.4.5.23
sshd2: none
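
If you want to sanity check wrapper rules like these before relying on them, the tcp_wrappers package ships with tcpdmatch, which predicts what the wrappers would do for a given daemon and client (the first address is the example Server I address; the second is just a made up outside host):

tcpdmatch sshd1 167.4.5.23   # should report access granted
tcpdmatch sshd1 10.9.8.7     # should report access denied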

This now ensures that the only way SSH traffic will be allowed to reach Server II is through (or from) Server I. But let's say that isn't enough for you. Let's say you want a little more security so you can sleep a little better at night. Enter netfilter and IPTables. On Server II, add the following to your firewall script:

iptables -N SSH                             # Create the SSH chain
iptables -A INPUT -p tcp --dport 22 -j SSH  # Send incoming SSH traffic to the SSH chain
iptables -A SSH -s 167.4.5.23/32 -j ACCEPT  # Allow from Server I
iptables -A SSH -m limit --limit 5/s -j LOG # Log everything else (rate limited)
iptables -A SSH -j DROP                     # Drop everything else

So what's the point of all this extra configuration? Simple: it allows for a little more flexibility when it comes to your setup. Although I recommend using SSH keys and not allowing direct root access from anywhere, you can get away with a little more. You can allow root access via an SSH key. And if you have enough protections/layers/security in place, you may even consider using a password-less SSH key, depending on what the role of the contacting server is (i.e. rsync over SSH).
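As a client-side illustration of the funneling idea, here is a minimal ~/.ssh/config sketch (the Host alias, user name, and internal address are made up; 167.4.5.23 is the example Server I address from above, and this assumes nc is installed on Server I):

Host server2
    HostName 10.0.0.2                                   # internal address of Server II (made up)
    User youruser
    ProxyCommand ssh -q youruser@167.4.5.23 nc %h %p    # hop through Server I

With something like this in place, running "ssh server2" from the outside only ever touches Server II by way of Server I, which is exactly what the wrapper and iptables rules above enforce on the server side.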

In the optimal version of a network setup with SSH, you may want to only allow user access to Server II through Server I. Then, to top it off, only allow sudo access to root if the user is in the sudoers file. This throws a lot more protection behind Server II, but makes it somewhat complicated to just do something simple. This is especially true if Server II is on an internal network which isn't very accessible anyway. The advice I generally give is in the form of the question, "What is the tradeoff?" More times than not, ease of use is the answer. So keep in mind that ease of use isn't out of the realm of possibility; just remember that layered security can be your friend. It's usually where the tradeoff comes from.

Some Introduction

Friday, February 2nd, 2007

First off I’d like to thank Dancho Danchev for the mention in his blog entry PR Storm. My creating a blog was in no small part thanks to reading his and reposting it to Linux Security. I also plan on commenting a little on some of the things he has to say. Now that I have started a new job, I have a little more time to do such things.

I originally started my blog to keep track of a lot of things that I do on a regular basis. Now I feel like I have the time to share my ideas and make them slightly more readable for others, and possibly even usable. So the likely thing to do is to cover things that I know: Perl, Systems Administration, Privacy, and Security. We’ll see where it goes from there.

My goal is to publish something at least every other day, or 3 – 4 times per week. If I have more time, I will put more up. If anyone has any topics that they would like me to cover, please feel free to let me know. My email address is on my site (eric AT lubow dot org). Thanks, Enjoy.

Patching Procedure vs. Exploitation Potential

Thursday, January 25th, 2007

When you talk to many security experts, they pretty much agree that when a vulnerability hits, it’s necessary that it be patched and that it’s only a matter of time until the sh*t hits the fan and some real knowledgeable black hat puts something together for the script kiddies to play with. But a lot of people seem to forget that every time a patch is required on a production system, there is due process that system administrators must go through. One of the primary steps is simply evaluation.

The primary questions that need to be evaluated are:

What is the likelihood of the vulnerability being exploited or the damage that could be caused if it is exploited?

vs.

How long will it take to apply the patch, test it, implement it, then deploy it to the production environment? What kind of impact will that have on the production servers in terms of outages/downtime? Will it break anything else?

Let’s take some time to break these down. I have always found that the easiest way for most people to understand a problem is to use an example. I don’t want to single out phpBB, but since it recently came up and spurred a necessary conversation, I will use it for my example. The advisory that I am referencing is available here from Bugtraq.

At one of the many websites I run, I administer a phpBB forum. The forum is relatively low volume, but high volume enough to attract spammers, which means it likely also attracts hackers (of the black hat variety). The phpBB version is 2.0.21. For a few reasons, we have not only modified some of the source code of phpBB, but we have also added plugins. For anyone who has any experience adding plugins into phpBB, you know that it’s akin to chewing glass (to say the least). Even though we version track in CVS, it would still be somewhat of a PITA to update to 2.0.22. The process would be something along the lines of:

Import the new version into the old version with the changes into CVS. See if it makes sense to resolve the conflicts. If so, resolve the conflicts and begin testing. If not, figure out how to duplicate the changes in the previous version (2.0.21) in the new version (2.0.22). Once that’s been done, then add the plugins that were installed in the old version into the new version. Come up with a transition plan for the production server. Back up the data and do a few test runs of the transition on the development box. Then schedule the outage time and do the turnover to the new server. Then pray everything goes ok for the transition. Simple, No?
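
Just to make the first couple of those steps concrete, here is a rough sketch of the vendor-branch import and merge in CVS (the module name "forum" and the tags are made up for illustration; this is the general CVS third-party-source workflow, not the exact commands from my setup):

# import the pristine 2.0.22 release onto a vendor branch
cd phpBB-2.0.22
cvs import -m "phpBB 2.0.22 vendor drop" forum PHPBB V2_0_22

# merge the vendor changes into the locally modified working copy,
# then resolve conflicts with our patches and plugins before testing
cvs checkout -j V2_0_21 -j V2_0_22 forum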

The point of going through that lengthy explanation was to demonstrate that the upgrade process may not be as simple (in a lot of cases) as:

apt-get update && apt-get upgrade

The exploit itself requires a user to create a shockwave flash file with certain parameters, then put it into a specific web page with certain parameters, and then it must be private messaged (emailed) to someone who is already signed into the board (has an active cookie).

Many security experts would tell you, “It’s a vulnerability, it needs to be patched immediately.” Well, let’s do that evaluation thing I was referring to earlier. How likely is it that someone is going to take the time to create that flash file? And even if someone does go to that trouble, what’s to say that if a user (or the admin) receives the message in an email, they are going to visit the site and watch the video?

My colleague was asserting that it’s out there on the internet and needs to be protected. And to that extent, I certainly agree. However, the amount of time that it would take to make all those changes, test them, and deploy the changes to the production server far outweighs the possibility of the application being exploited.

When I first started out in security, I took the approach, “It’s a vulnerability…Security at all costs.” Now I have learned that sometimes one needs to balance out time vs. need vs. priority. So I encourage System Administrators to think before jumping into situations like that. Think of how much more work could be accomplished in the time that would have been spent trying to patch something that probably wouldn’t have been exploited to begin with.

Happy Hanukah

Thursday, December 21st, 2006

I don’t often write poetry. In fact, I am not usually a great writer of poetry (since I don’t read much of it anyway). However, I do, for some odd reason, find it fun to write programming poetry, and Perl just makes it so easy. So amidst all the holiday cheer (mostly Christmas) and in light of seeing this Christmas Poem, I decided to throw together my own version for the Jewish population. Enjoy and Happy Hanukah!

BEGIN {
  if ($kids) { write $santa; dump $kids; }
  foreach (@night) { study $prayers and $stories; }
  select $gentiles; #to
    join "you"; if ($gentiles) { open $MIND,"ed"; }
}

foreach $night (1..8) {
  wait until $sundown;
  y/light/candles/;
  tell $oil_story;
  seek $matches, $candle, 0;
  bless $candle, $prayers;

  LIGHTING: {
    $candle unless $lit; next $candle;
    redo LIGHTING until last, menorah, is, lit;
  }
  goto FOOD unless ($young_children);
  join $gifts, $child;
  open ALL, $gifts;

  FOOD:
    do {
      unpack "FOOD", $table;
      chop $potato_latkes;
      sort @tater_tots;
      chop $brisket;
      open $applesauce, "packages";
      sin unless (bagles and lox); exists $on{table};
      require fork;
      if (exists $jewish{mother}) { use constant feeding, "rules"; };
      while ($jewish_household) { }
        continue { $eating until ($stomach > $full); }
    } while ($food eq "Kosher");

  TRADITIONS: {
    tell $stories;
    listen $stories, $history;
    setpriority $children, $jewish{spouse}, $marriage;
    do { $jewish_geography if (local $family); }
  }
  accept $gelt, $family;
  push @dreidel, $around_room;
  read($it,$happened,$over_there) unless (local $Israel);
}

do $mitzvot until $tired;

require $all; tell $everyone; { "Happy Hanukah"; }