Archive for the ‘Security’ Category

Google Securing The Web One Discrete Monopolizing Push At A Time

Friday, November 4th, 2011

Contrary to speculation by some, Google’s decision to encrypt search data is motivated by the goal of making the web as a whole more secure, not by economic interests. I think Google is silently forcing the internet to do what it should be doing on its own.

Fixing CentOS Root Certificate Authority Issues

Wednesday, June 1st, 2011

While trying to clone a repository from GitHub the other day on one of my EC2 servers, I ran into an SSL verification issue. As it turns out, GitHub renewed their SSL certificate (as people who are responsible about their web presence do when their certificate is about to expire). As a result, I couldn’t git clone over https. This presents a problem, since all my deploys work using git clone over https.

Joe Job and SPF

Tuesday, March 27th, 2007

First off, get your mind out of the gutter. A joe job has absolutely nothing to do with what you’re thinking about. It’s email-related, and it can be a pain in the ass to deal with.

What is a Joe Job?
Joe Job is the term used to describe the act of forging bulk email so that it appears to the recipients to be coming from the victim; that is, a spambot or botnet sends a massive amount of email spoofed to look as if it originates from the victim’s address. The name was coined after an attack launched against http://www.joes.com/ in January of 1997. The perpetrator (a spammer) sent a flood of emails from spoofed addresses in a (successful) attempt to enrage the recipients into taking action against the company.

Why do I care?
There are many reasons, but I will just cover a few until you get the picture. The main victim of a SPAM attack of this nature ends up with an INBOX full of junk. This junk can potentially include malware, viruses, and any number of phishing or scam based attacks. Also, since there is so much email traversing the connection, the bandwidth gets eaten up and, depending on the actual amount of SPAM coming in, the connection could be rendered unusable until all the mail is filtered through. When there are thousands of messages, that could take days or even weeks. Since the originating address is spoofed, those who don’t know better are going to get very upset with the person they *believe* to be responsible for sending the email. The last item I will touch on is that the person whose email address was spoofed now has to deal with all the auto-responses and whatever else may automatically come their way. (I think you get the idea.)

What can I do?
There is nothing you can do to completely avoid it, short of not using the internet or email, but there are some steps you can take. One of the first things to look at is SPF (Sender Policy Framework), which is set up in DNS.

In your DNS zone file for server.com, you should add something like the following:

server.com.  IN TXT    "v=spf1 a mx -all"
  • v – The version of SPF to use
  • a mx – The DNS mechanisms (your A and MX records) permitted to send messages for server.com
  • -all – Reject everything else that does not match a or mx

This can also get more in depth depending on the number of email accounts you have and where they live. For instance, let’s say your mail server’s name is mail.server.com and you also have email accounts on gmail (gmail.com) and at your work (myjob.com). Your record would look something like the following:

server.com.   IN   TXT   "v=spf1 mx a:mail.server.com include:gmail.com include:myjob.com -all"

The a:mail.server.com entry says that mail.server.com is authorized to send mail for your domain. The include statements are basically saying that everything considered legitimate by either gmail.com or myjob.com should also be considered legitimate by you.
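
Once a record like this is published, it is worth verifying what resolvers actually see. A quick sanity check with dig (run against the example domain above; your output will reflect your own zone):

$ dig +short TXT server.com
"v=spf1 mx a:mail.server.com include:gmail.com include:myjob.com -all"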

There is a lot more information available on configuring SPF. The documentation should be read thoroughly, as improperly configured SPF can prevent legitimate email from flowing.

SPF is just one method that can be used to fight against being a victim of a Joe job. You should always be using some method of SPAM filtering in addition to SPF. Layered security needs to be the approach when locking down any type of server or service.

Super Security vs. Ease of Use

Monday, February 5th, 2007

I think I am going to get back onto my soapbox about being extraordinarily secure, only this time I am going to weigh it against ease of use. I would once again like to reiterate that I am strongly for security in all its aspects. However, some people get into the presence of a security individual and freak out: they insist that they know which things are secure and which are insecure, and then do the insecure ones anyway.

A great example of this is SSH access to servers. Consider the following physical network layout.

                     ,---------- Server I
Inet---------|F/W|---+---------- Server II
                     `---------- Server III

If you want to leave certain nice-to-haves or ease-of-use functionality available to yourself, such as leaving SSH root logins enabled or having a machine with anonymous FTP access available, then take a slightly different approach to securing your environment (or those particular machines): layered security. Without changing the physical layout of your network, change the logical layout using iptables and/or TCP wrappers. Make the network look more like this:

                             ,------Server II
Inet-----|F/W|----Server I--<
                             `------Server III

This essentially says that all traffic destined for Server II or Server III will now be funneled through Server I. This can be used in a variety of ways. Let's say that all 3 servers are in the DMZ (De-Militarized Zone) and that Servers II and III are hosting the services available to the outside. Allowing direct access to them via SSH or FTP probably isn't the best idea, because these are services that allow files to be changed. So what can we do to change this?

First let's configure TCP wrappers. Starting out with the assumption that you are being paranoid where necessary, let's set SSH up to only allow incoming connections from Server I and deny from everywhere else. In the /etc/hosts.deny file on Server II, add the following lines (everything that goes for Server II goes for Server III to mimic the setup):

sshd1: ALL
sshd2: ALL

Now in the /etc/hosts.allow file on Server II, add the IP address of Server I (IPs are all made up):

sshd1: 167.4.5.23
sshd2: none

This now ensures that the only way SSH traffic will be allowed to reach Server II is through (or from) Server I. But let's say that isn't enough for you. Let's say you want a little more security so you can sleep a little better at night. Enter netfilter and iptables. On Server II, add the following to your firewall script:

iptables -N SSH                             # Create the SSH chain
iptables -A INPUT -p tcp --dport 22 -j SSH  # Send port 22 traffic to the SSH chain
iptables -A SSH -s 167.4.5.23/32 -j ACCEPT  # Allow from Server I
iptables -A SSH -m limit --limit 5/s -j LOG # Log the problems
iptables -A SSH -j DROP                     # Drop everything else

So what's the point of all this extra configuration? Simple: it allows for a little more flexibility when it comes to your setup. Although I recommend having SSH keys and not allowing direct root access from anywhere, you can get away with a little more here. You can allow root access via an SSH key. And if you have enough protections/layers/security in place, you may even consider using a password-less SSH key, depending on the role of the contacting server (e.g. rsync over SSH).

In the optimal version of a network setup with SSH, you might only allow user access to Server II through Server I, and then, to top it off, only allow access to root via sudo if the user is in the sudoers file. This throws a lot more protection in front of Server II, but makes it somewhat complicated to do something simple. This is especially true if Server II is on an internal network that isn't very accessible anyway. The advice I generally give is in the form of the question, "What is the tradeoff?" More times than not, ease of use is the answer; that is usually the price the tradeoff exacts. So keep in mind that ease of use isn't out of the realm of possibility, and remember that layered security can be your friend.
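
As a practical illustration of funneling everything through Server I, here is a minimal sketch of an OpenSSH client configuration that hops through Server I to reach Server II. The host alias and the internal address are made up, and it assumes netcat (nc) is installed on Server I:

# ~/.ssh/config on your workstation
Host server2
    HostName 10.0.0.12                          # Server II (hypothetical internal IP)
    User eric
    ProxyCommand ssh eric@167.4.5.23 nc %h %p   # hop through Server I

With this in place, "ssh server2" connects through Server I transparently, so the TCP wrapper and iptables rules above never have to be loosened.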

10 More Tips Towards Securing Your Linux System

Wednesday, January 31st, 2007

Since everyone seemed to enjoy my first round of tips and tricks for securing a Linux system, I figured I would throw together a few more. Enjoy.

  1. There are files that get changed very infrequently. For instance, if your system won’t have any users added anytime soon, it may be sensible to use chattr to make the /etc/passwd and /etc/shadow files immutable. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
       chattr +i /etc/passwd /etc/shadow
  2. Password-protect your Linux install with LILO. Edit your /etc/lilo.conf. At the end of each Linux image stanza that you want to secure, put the lines:
       read-only
       restricted
       password = MySecurePassword

     Ensure you rerun /sbin/lilo so the changes take effect.

  3. Users who have sudo accounts set up can have the account configured to change to root without a password. To check this, as root use the following command:
       grep NOPASSWD /etc/sudoers

    If there is an entry in the sudoers file, it will look like this:

       eric    ALL=NOPASSWD:ALL

    To get rid of this, type visudo and remove the line in that file.

  4. Use sudo to execute commands as root as a replacement for su. In the /etc/sudoers file, add the following lines by using the visudo command:
       Cmnd_Alias LPCMDS = /usr/sbin/lpc, /usr/bin/lprm
       eric    ALL=LPCMDS

     Now the user ‘eric’ can sudo and use the lpc and lprm commands without having any other root-level access.

  5. Turn off PasswordAuthentication and PermitEmptyPasswords in the SSH configuration file /etc/ssh/sshd_config. This ensures that users cannot log in with empty passwords or log in without SSH keys:
       PermitEmptyPasswords no
       PasswordAuthentication no
  6. Instead of using “xhost +” to open up access to the X server, be more specific. Use the name of the host you are granting control to:
       xhost +storm:0.0

    Once you are done using it, remember to disallow access to the X server from that host:

       xhost -storm:0.0
  7. To find out what the .Xauthority magic cookie looks like and to send it (the authorization information) to a remote host, use the following command:
       xauth extract - $DISPLAY | ssh storm xauth merge -

     Now the user who ran this command on the original host can run X clients on storm. xauth needs to be present on both hosts.

  8. To turn on spoof protection, run a simple bash loop:
       for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 1 > $i; done

    Be careful to remember that it drops packets more or less ‘invisibly’.

  9. A SYN-flood attack has the ability to bring the network side of your Linux box to a snail-like crawl. TCP syncookie protection attempts to stop this from taking a heavy toll on the machine. To enable tcp_syncookies protection, use the following command:

       echo 1 > /proc/sys/net/ipv4/tcp_syncookies
  10. When possible, use secure connection methods as opposed to insecure methods. Unless you are required to use telnet, substitute ssh (Secure SHell) for rsh or telnet. Instead of POP3 or IMAP, use SPOP3 or SIMAP (IMAPS). Both SIMAP and SPOP3 are just versions of IMAP and POP3 running over an SSL (Secure Sockets Layer) tunnel; a quick way to test such a service follows this list.
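
To verify that one of those SSL-wrapped services is actually answering, openssl’s s_client makes a handy test client (the mail host name here is hypothetical; 993 is the standard IMAPS port). If a certificate chain and a server greeting come back, the SSL side of the service is up:

openssl s_client -connect mail.example.com:993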

10 Tips To Start Securing Your Linux System

Monday, January 29th, 2007

A while back I was asked to write a few quick tips that one, as an administrator, would find helpful. They were published in one form or another and are now available here. There are MANY more, but these are just a few. Enjoy.

  1. Users who may be acting up or aren’t listening can still be controlled using a program called ‘skill’ (signal kill), which is part of the ‘procps’ package.
       Halt/Stop User eric: skill -STOP -u eric
       Continue User eric: skill -CONT -u eric
       Kill and Logout User eric: skill -KILL -u eric
       Kill and Logout All Users: skill -KILL -v /dev/pts/*
  2. Make use of the security tools out there to test your server’s weaknesses. Nmap is an excellent port scanner for testing which ports you have open. On a remote machine, type the command:
       # nmap -sTU <server_ip>

       Starting nmap 3.70 ( http://www.insecure.org/nmap/ ) at 2006-08-10 13:51 EST
       Interesting ports on eric (172.16.0.1):
       (The 3131 ports scanned but not shown below are in state: closed)
       PORT    STATE         SERVICE
       22/tcp  open          ssh
       113/tcp open          auth

       Nmap run completed -- 1 IP address (1 host up) scanned in 221.669 seconds
  3. Sometimes a production server is kept in a common area (although this should not be the case, some situations are unavoidable). To avoid an accidental CTRL-ALT-DEL reboot of the machine, do the following to remove the relevant line from the /etc/inittab file:

       # sed -i 's/ca::ctrlaltdel:/#ca::ctrlaltdel:/g' /etc/inittab
  4. Two SSH configuration options that can improve security should be checked on your production server. UsePrivilegeSeparation is an option which, when enabled, allows the OpenSSH server to run a small (necessary) amount of code as root and the rest of the code in a chroot jail environment. StrictModes checks to ensure that your ssh files and directories have the proper permissions and ownership before allowing an SSH session to open up. The directives should be set in /etc/ssh/sshd_config as follows:

       UsePrivilegeSeparation yes
       StrictModes yes
  5. The default umask (user file-creation mask) on most systems should be 022 to ensure that files are created with the permissions 0644 (-rw-r--r--). To change the default umask setting for a system, edit /etc/profile to ensure that your umask is appropriate for your setup; see the sketch after this list.
  6. Some users like to have a passwordless account. To check for this, you need to look at the /etc/shadow file with the following command line:
    awk -F: '$2 == "" { print $1, "has no password!" }' /etc/shadow
  7. Someone else who has access to the superuser account may have altered the password file and potentially made themselves a superuser. This is a method to check:
       awk -F: '$3 == 0 { print $1, "is a superuser!" }' /etc/passwd
  8. Setuid and setgid files have the potential to be very hazardous if they are accessible by the wrong users on the system. Therefore it is handy to be able to check which files fall into this category:
       find /dir -xdev -type f -perm +ug=s -print
  9. World-writable files can be left around by users wanting to make things easier for themselves. It is necessary to be careful about who can write to which files. To find all world-writable files:
       find /dir -xdev -perm +o=w ! \( -type d -perm +o=t \) ! -type l -print
  10. Some attackers, prior to attacking a host (or users nmapping a host), will check to see if the host is alive. They do this by ‘ping’ing the host with an ICMP echo request packet. To drop these types of packets, use iptables:

       iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
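
Following up on tip 5, here is a minimal sketch of what the relevant line in /etc/profile might look like (022 is the common default; stricter values such as 027 or 077 may suit some environments):

umask 022    # new files 0644, new directories 0755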

Patching Procedure vs. Exploitation Potential

Thursday, January 25th, 2007

When you talk to many security experts, they pretty much agree that when a vulnerability hits, it needs to be patched, and that it’s only a matter of time until the sh*t hits the fan and some real knowledgeable black hat puts something together for the script kiddies to play with. But a lot of people seem to forget that every time a patch is required on a production system, there is due process that system administrators must go through. One of the primary steps is simply evaluation.

The primary questions that need to be evaluated are:

What is the likelihood of the vulnerability being exploited, and how much damage could be caused if it is?

vs.

How long will it take to apply the patch, test it, implement it, then deploy it to the production environment? What kind of impact will that have on the production servers in terms of outages/downtime? Will it break anything else?

Let’s take some time to break these down. I have always found that the easiest way for most people to understand a problem is to use an example. I don’t want to single out phpBB, but since it recently came up and spurred a necessary conversation, I will use it for my example. The advisory that I am referencing is available here from Bugtraq.

At one of the many websites I run, I administer a phpBB forum. The forum is relatively low volume, but high volume enough to attract spammers, which means it’s likely that it also attracts hackers (of the black hat variety). The phpBB version is 2.0.21. For a few reasons, we have not only modified some of the source code of phpBB, but we have also added plugins. For anyone who has any experience adding plugins into phpBB, you know that it’s akin to chewing glass (to say the least). Even though we version track in CVS, it would still be somewhat of a PITA to update to 2.0.22. The process would be something along the lines of:

Import the new version into CVS alongside the old version with our changes. See if it makes sense to resolve the conflicts. If so, resolve the conflicts and begin testing. If not, figure out how to duplicate the changes made to the previous version (2.0.21) in the new version (2.0.22). Once that’s been done, add the plugins that were installed in the old version into the new version. Come up with a transition plan for the production server. Back up the data and do a few test runs of the transition on the development box. Then schedule the outage time and do the turnover to the new server. Then pray everything goes OK during the transition. Simple, no?
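
For the curious, that first step might look something like the classic CVS vendor-branch dance (the module and tag names below are hypothetical):

# Drop the pristine 2.0.22 tree onto the vendor branch
cd phpbb-2.0.22
cvs import -m "phpBB 2.0.22 vendor drop" forum PHPBB V2_0_22

# Merge the upstream changes into our modified sources; conflicts get resolved by hand
cvs checkout -j V2_0_21 -j V2_0_22 forum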

The point of going through that lengthy explanation was to demonstrate that the upgrade process may not be as simple (in a lot of cases) as:

apt-get update && apt-get upgrade

The exploit itself requires an attacker to create a Shockwave Flash file with certain parameters, put it into a specific web page with certain parameters, and then private message (or email) it to someone who is already signed into the board (i.e. has an active cookie).

Many security experts would tell you, “It’s a vulnerability, it needs to be patched immediately.” Well, let’s do that evaluation thing I was referring to earlier. How likely is it that someone is going to take the time to create that Flash file? And even if someone does go to that trouble, what’s to say that a user (or the admin) who receives the message in an email is going to visit the site and watch the video?

My colleague was asserting that it’s out there on the internet and needs to be protected. And to that extent, I certainly agree. However, the amount of time that it would take to make all those changes, test them, and deploy the changes to the production server far outweighs the possibility of the application being exploited.

When I first started out in security, I took the approach, “It’s a vulnerability… security at all costs.” Now I have learned that sometimes one needs to balance time vs. need vs. priority. So I encourage system administrators to think before jumping into situations like that. Think of how much more work could be accomplished in the time that would have been spent patching something that probably wasn’t going to be exploited in the first place.

Configuring mod_security for EnGarde Secure Linux

Wednesday, January 24th, 2007

Introduction

This document is intended to guide a user through initially setting up and understanding mod_security with Apache2 under EnGarde Secure Linux. Once you have completed reading this document, you should understand the basics of mod_security, what it is used for, and why it may apply to you and your environment.

Why mod_security

The need for mod_security may not be initially apparent, since we are all perfect programmers and rarely make a mistake that could prove hazardous to security. It may not be for you, but it is for the users of your servers who may not be as adept at creating web applications.

mod_security is a web application intrusion detection and prevention engine. It operates by hooking itself into Apache and inspecting all requests against your specific ruleset. It can be used to monitor your server by logging requests, or even to protect it by denying attacks.

Skills Needed

You will need to have access to the WebTool and the GDSN Package Manager. You need to have shell access to the machine and the ability to use a text editor to make the necessary changes to the configuration files.

Installation

To install mod_security, go into the GDSN Manager in the Guardian Digital WebTool.

 System -> Guardian Digital Secure Network
 Module -> Package Management

Find the line that says libapache-mod_security and check the checkbox next to it. Click the Install Selected Packages button. Let the mod_security package install.

Configuration

Now it’s time to configure the mod_security package. The first thing that has to be done is to add the configuration file for mod_security (which we are going to create) to the Apache2 configuration. To accomplish this, ensure that the following line is somewhere in your /etc/httpd/conf/httpd.conf:

 Include conf/mod_security.conf

This ensures that when Apache2 starts up, the configuration that you specify in /etc/httpd/conf/mod_security.conf will be loaded.

Basic Configuration

Once you have installed mod_security, it’s time for some basic configuration. In order to keep consistency, the mod_security.conf configuration file should be created in the /etc/httpd/conf/ directory. For a basic configuration (which we will walk through step-by-step), your /etc/httpd/conf/mod_security.conf file should look as follows:

 LoadModule security_module /usr/libexec/apache/mod_security.so
 <IfModule mod_security.c>
   SecFilterEngine On
   SecFilterDefaultAction "log"
   SecFilterCheckURLEncoding On
   SecFilterForceByteRange 1 255

   SecServerSignature "Microsoft-IIS/5.0"

   SecAuditEngine RelevantOnly
   SecAuditLog /etc/httpd/logs/modsec_audit_log
   SecFilterDebugLog /etc/httpd/logs/modsec_debug_log
   SecFilterDebugLevel 0
 </IfModule>

SecFilterEngine

This directive turns on mod_security.

SecFilterDefaultAction

This directive decides what happens to a request that is caught by the mod_security filtering engine. In our case, we are going to log the request. By reading the documentation, you will find that there are many other options available. By changing this line slightly (once you have logged for a while and seen when and how the mod_security engine catches requests), you can deny requests and produce errors:

 SecFilterDefaultAction "deny,log,status:404"

This line denies the request, logs it to your log files, and sends the requester back an HTTP status code 404 (also known as Page Not Found).

SecFilterCheckURLEncoding

This directive checks the URL to ensure that all characters in the URL are properly encoded.

SecFilterForceByteRange

This directive asserts which bytes are allowed in requests. The range 1 to 255 specified in the example allows almost all characters. To bring this down to just the printable ASCII character set, replace the above line with:

 SecFilterForceByteRange 32 126

SecServerSignature

This directive can be used to attempt to mask the identity of the Apache server. Although this method works well, it is not 100% effective, as there are other methods that can be used to determine the server type and version. It should be noted that for this to work, the Apache2 configuration variable ServerTokens should be changed from Prod (the default) to Full, so the line reads as follows:

 ServerTokens Full

SecAuditEngine

This directive allows more information about the methods of an attacker to be logged to the specified logfile. To turn this on and log every request, use the syntax:

 SecAuditEngine On

This is not very desirable, as it produces a LOT of output. The more desirable setting is the one used above:

 SecAuditEngine RelevantOnly

This logs only the interesting stuff that may be useful in tracing back the methods of an attacker.

SecAuditLog

This is the location of the audit log file. It is generally preferred to use absolute paths to files to ensure the correct path is being used.

SecFilterDebugLevel

This directive sets the debug level logged to the specified logfile. The value of 0 used above should be kept on production systems. While the environment is in testing, a level of 1 to 4 should be used, with verbosity increasing from 1 up to 4.

SecFilterDebugLog

This is the location of the debug log file. It is generally preferred to use absolute paths to files to ensure the correct path is being used.

We will now add some lines to do selective filtering. Selective filters are used to handle specific situations that cannot be targeted with site-wide policy. You need to be careful about what you make site-wide policy, since some of these security measures can break your current setup.

There are even more in depth uses where you can number rules and apply them to certain sets of directives and not to others. mod_security allows for very granular control. The in depth discussions on using these is beyond the scope of this document.

General

Since mod_security is a keyword-driven engine, it will take the specified action on simple keyword matches. This is to say that any request matching the pattern that follows the SecFilter directive will trigger the appropriate action. For example:

 SecFilter "<applet"

If the <applet> tag appears anywhere in the request, then the log action specified above is taken.

XSS Attacks

To try to prevent some types of cross site scripting attacks, you can add the following lines to your configuration file:

 SecFilter "<script"
 SecFilter "<.+>"

These try to prevent JavaScript or HTML injections.
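
To watch a filter fire, send a matching request yourself and check the audit log. The URL below is hypothetical; with the default action configured earlier the request is only logged, or rejected with a 404 if you switched to the deny action:

 curl -i "http://www.example.com/index.php?q=<script>alert(1)</script>"
 tail /etc/httpd/logs/modsec_audit_log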

Directory Traversal

Rarely will it be necessary for a user to traverse directories using the “../” construct. In order to prevent that, we can add the line:

 SecFilter "\.\./"

GET/HEAD Requests

With the use of these lines, we will not accept GET or HEAD requests that have bodies:

 SecFilterSelective REQUEST_METHOD "^(GET|HEAD)$" chain
 SecFilterSelective HTTP_Content-Length "!^$"

Unknown Requests

Occasionally requests come across (usually malicious ones) that we don’t know how to handle. At that point we let mod_security handle the request by adding the following line:

 SecFilterSelective HTTP_Transfer-Encoding "!^$"

Conclusion

At this point, you should be capable of setting up a basic installation of mod_security. There are many more combinations of both simple and advanced techniques and directives that can be used to protect your server. By reading the documentation, you can gain very granular control over your web server’s attack detection and prevention.
