Patching Procedure vs. Exploitation Potential

When you talk to many security experts, they pretty much agree that when a vulnerability hits, it has to be patched, and that it’s only a matter of time until the sh*t hits the fan and some genuinely knowledgeable black hat puts something together for the script kiddies to play with. But a lot of people seem to forget that every time a patch is required on a production system, there is a due process that system administrators must go through. One of the primary steps is simply evaluation.

The primary questions that need to be evaluated are:

What is the likelihood of the vulnerability being exploited, and how much damage could be caused if it is?

vs.

How long will it take to apply the patch, test it, and deploy it to the production environment? What kind of impact will that have on the production servers in terms of outages/downtime? Will it break anything else?

Let’s take some time to break these down. I have always found that the easiest way for most people to understand a problem is to use an example. I don’t want to single out phpBB, but since it recently came up and spurred a necessary conversation, I will use it for my example. The advisory that I am referencing is available here from Bugtraq.

At one of the many websites I run, I administer a phpBB forum. The forum is relatively low volume, but busy enough to attract spammers, which means it likely attracts hackers (of the black hat variety) as well. The phpBB version is 2.0.21. For a few reasons, we have not only modified some of phpBB’s source code, but we have also added plugins. Anyone who has experience adding plugins to phpBB knows that it’s akin to chewing glass (to say the least). Even though we track our changes in CVS, it would still be somewhat of a PITA to update to 2.0.22. The process would be something along the lines of:

Import the new version into CVS alongside the old version with our changes. See if it makes sense to resolve the conflicts. If so, resolve the conflicts and begin testing. If not, figure out how to duplicate the changes made to the previous version (2.0.21) in the new version (2.0.22). Once that’s been done, add the plugins that were installed in the old version into the new version. Come up with a transition plan for the production server. Back up the data and do a few test runs of the transition on the development box. Then schedule the outage time, do the turnover to the new server, and pray everything goes OK. Simple, no?
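
For the curious, the CVS side of that first step looks roughly like the sketch below. This is only a sketch: the module name (phpbb) and the tags (PHPBB, V2_0_21, V2_0_22) are illustrative, not our actual setup.

# Import the stock 2.0.22 sources onto the vendor branch
cd /tmp/phpBB-2.0.22
cvs import -m "Vendor drop of phpBB 2.0.22" phpbb PHPBB V2_0_22

# Merge the vendor changes between 2.0.21 and 2.0.22 into our modified working copy
cvs checkout -j V2_0_21 -j V2_0_22 phpbb

# Resolve any conflicts CVS reports, re-apply the plugins, test, then commit
cd phpbb
cvs commit -m "Merge phpBB 2.0.22 into locally modified tree"

And that only covers the source merge; the plugin re-installation, testing, and cutover still come after it.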

The point of going through that lengthy explanation was to demonstrate that the upgrade process may not be as simple (in a lot of cases) as:

apt-get update && apt-get upgrade

The exploit itself requires an attacker to create a Shockwave Flash file with certain parameters, embed it in a web page with certain parameters, and then private message (email) it to someone who is already signed into the board (i.e., has an active cookie).

Many security experts would tell you, “It’s a vulnerability, it needs to be patched immediately.” Well, let’s do that evaluation thing I was referring to earlier. How likely is it that someone is going to take the time to create that flash file? And even if someone does go to that trouble, what’s to say that if a user (or the admin) receives the message in an email, they are going to visit the site and watch the video?

My colleague was asserting that it’s out there on the internet and needs to be protected. And to that extent, I certainly agree. However, the amount of time that it would take to make all those changes, test them, and deploy the changes to the production server far outweighs the possibility of the application being exploited.

When I first started out in security, I took the approach, “It’s a vulnerability…Security at all costs.” Now I have learned that sometimes one needs to balance time vs. need vs. priority. So I encourage system administrators to think before jumping into situations like that. Think of how much more work could be accomplished in the time that would have been spent patching something that probably wouldn’t have been exploited to begin with.

  • David French

    I can’t agree more.

    Too many people want to patch everything now. They don’t want to analyze the issue. What is the risk? What is the likelihood that it will be exploited? What is the impact of applying the change?

    Depending on the product, I can have anywhere from 3 to more than 5 environments from development through production, possibly including dev, test, qa, perf, and prod. Oops, no time, let’s just patch production and hope for the best. I hear this way too often from management. Or we patch all non-production systems in one day and then apply the patch to production the next. Makes me wonder why we have the other environments if we aren’t testing, QAing, etc. It almost comes down to: the patch installed and the server is still running, so let’s apply it to prod!

    It’s even worse when you add in SOX and PCI, which seem to force people to patch, often at any cost, though they don’t say that outright. They just require patches within a small window after a vendor releases them.

    Seems to me the patch, patch, patch mentality in itself can lead to less security since testing, etc. often gets ignored.

    One time I caught a problem based on an su bug where local users could exploit the program and become root. While su from the vendor is executable by anyone, on our systems we limit access to a small number of users. Well, our internal SOX security group forced the patch installation. Sounds good that we plugged a locally exploitable hole. Problem was, the vendor patch changed the permissions on the executable back to allow anyone to run it. Until I found the issue on a co-worker’s system, we were running a program that had been limited to a small set of trusted users but was now runnable by anyone. While it could no longer be exploited, it opened up its use to everyone. Hm, which is worse? What was my risk with 5 trusted people being able to exploit root, which they already had? Now I had systems where any user could possibly become any other user through su. The latter ended up being a bigger problem, as it allowed improper access to other accounts. While sudo was still being used and its logging worked, some people were now bypassing sudo and using su directly. No logs were available for auditing when su was used. Problem was, SOX controls required it. So we ended up patching a non-issue based on SOX requirements, and that broke what mattered for SOX: the auditing…
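
    For what it’s worth, the lockdown we had was roughly the classic setuid-group restriction, something like the commands below (a sketch only; the group name is illustrative, not our actual configuration):

    # Restrict su to members of a trusted group
    chgrp susers /usr/bin/su
    chmod 4750 /usr/bin/su     # setuid root, but executable only by group susers

    # The vendor patch silently reset the mode; a quick check after patching would have caught it
    ls -l /usr/bin/su          # expect something like -rwsr-x--- root susers

    A one-line permission check like that after every vendor patch would have flagged the regression immediately.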

    Don’t get me wrong. I am all for security patching when warranted. It’s just as you said: you can’t blindly apply patches. Any change, whether an upgrade, a security patch, or an enhancement, needs to be analyzed to see whether it makes sense. All systems are not the same, which is what this gets down to. Most security people treat them like they are.