Archive for the ‘ lessons learned ’ Category

Choosing a Product By Roadmap

Friday, November 18th, 2011

There are a lot of reasons to choose a specific technology. You can decide based on the skills that you or the engineers around you have. You can decide on a new technology because it’s the right tool. But there are times when all other things are equal and a coin flip would suffice. In my mind, that’s when you should choose a technology based on its roadmap.
(more…)

Community Participation

Thursday, March 18th, 2010

The more I branch out my interests (or skill sets), the more I find myself joining communities. I am a part of Yelp (food/restaurants), StackOverflow (programming questions), Codaset (social coding), Facebook, LinkedIn (professional networking), Disqus (blog comment system), and the list goes on and on for many of my interests. There are lots of communities for almost all imaginable interests. The key thing here is not just that I am a part of these networks or communities that I am interested in, but that I am a contributor.
(more…)

Changing Shoes For A Redesign

Tuesday, March 9th, 2010

The best way to rethink things is to be in the shoes of your users. Use your app how they use your app. Try to take a fresh look at your application like you’ve never seen it before. Would you change the location of the menu/navigation? Would you change the actual menus/navigation? Would you add a shortcut search box where there wasn’t one before? Maybe you remove the advertising or move the place that the ads are located so that they are less intrusive…

The idea is that every so often you need to take a step back. Looking at your application from your users’ perspective may well change how the entire application works. I’m not basing this on statistical analysis of the way people click, heatmaps, and all that good stuff (though they do have their applications); I’m talking about a pure usability test from another perspective. Where do new users look? Where do they click? What’s the first thing they want to go to? Are you putting them through information overload?

So take a step back, change shoes and take a fresh look at your app. No statistics, no heatmaps, no preconceived notions about the problem you are trying to solve (I know this is easier said than done). Just remember why you wrote your app in the first place. Try the passion on for size again and see if that doesn’t stir things up a bit.

Being Smart is all about Being Resourceful

Monday, January 25th, 2010

The internet gives us the ability to not know everything while keeping up the appearance that we do. Now that’s not to say that you should be a know-it-all, but you should definitely know how and where to get information when you need it. If you use a specific open source technology at work, then you need to know how to support it (because odds are, it was written by a few interested people and doesn’t have a company behind it). So you should know where the forums are, where the documentation is, where the mailing lists and the mailing list archives are, etc. Do they have an IRC channel where you can talk to live users who might be able to help on a more immediate basis? Maybe there was even a book written that you can get your hands on, a PDF, or even a screencast. If you’re lucky, you might write a Tweet about your frustration and one of the product’s creators will answer (which happened to me recently).
(more…)

Converting From Subversion To Git

Monday, November 16th, 2009

Now that I have basically fallen for Git, I decided to finally move my Subversion repository over to Git (this way I can finally have a remote backup of it that I am comfortable with on Codaset).

The method for this was a lot more straightforward than I expected it to be. For the conversion tool, I used Nirvdrum’s fork of svn2git. It’s a feature-complete version of the svn2git portion, though the rest of it is still in development. Since it is a Ruby gem, getting it installed was a breeze. Just make sure that you have Ruby and RubyGems installed.
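For reference, the whole process was roughly the following. The repository URL, project name, and remote are placeholders for whatever your setup actually uses; the authors file maps Subversion usernames to Git author identities:

```shell
# Install the gem (assumes Ruby and RubyGems are already present).
gem install svn2git

# Run the conversion in a fresh directory. The Subversion URL and the
# authors file path are examples, not real values.
mkdir my-project && cd my-project
svn2git http://svn.example.com/my-project --authors ~/.svn2git/authors

# Point the resulting Git repository at a remote and push everything up
# (the remote URL here is hypothetical).
git remote add origin git@codaset.com:username/my-project.git
git push origin master
```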
(more…)

Remote Code Storage

Monday, November 9th, 2009

I was chatting with a friend of mine the other day about version control and why it’s necessary. So I decided to throw together a few options and a little explanation about why it’s important.

I have been using version control in some form or another for many years. I started with CVS, then moved to Subversion (which I still use quite a bit), and now, as my latest post about Git GUIs on the Mac suggests, I have moved to Git. The one thing that has been consistent across every single transition is that I have always had some sort of remote code storage. During the CVS days, I used a CVS pserver and stored my code locally and remotely for safety (and ease of checkout/deployment). For Subversion, I always stored my code locally and used an Apache install somewhere with a WebDAV module to get at and deploy whatever code was necessary.

Ultimately, I use remote code storage for two reasons: to back up my existing code base (so I have it in more than one place) and to have a visualization of what is going on in the project. That visualization is handy as a central, consistent view for multiple people (unlike a personal client, which can differ per user).
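With Git, the remote copy is just another repository you push to. Here is a minimal sketch using a local bare repository to play the role of the remote; in practice it would live on another machine or a hosting service, and all the paths and the identity below are placeholders:

```shell
# A bare repository plays the role of the remote backup.
rm -rf /tmp/remote-demo.git /tmp/work-demo
git init --bare /tmp/remote-demo.git

# A working repository with one commit, pushed to the "remote".
git init /tmp/work-demo
cd /tmp/work-demo
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"
echo "hello" > README
git add README
git commit -m "initial commit"
git remote add origin /tmp/remote-demo.git
git push origin HEAD
```

Pushing `HEAD` sends the current branch regardless of whether your Git defaults to `master` or `main`.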
(more…)

Designing Towards The User

Wednesday, May 13th, 2009

Any Systems Administrator who hasn’t heard of Tom Limoncelli should probably do some reading. His latest blog post ‘Gorillas in the Mist’ or ‘Sysadmins at the Keyboard’? over at Everything Sysadmin talks about how the time spent designing a product or interface could often have been better spent if the organization had just spoken to the people who will actually be *using* the systems.

Those of us who actually do the administering of systems and, for the most part, “grew up” without the GUI feel more comfortable in the command-line environment. Even when I have to fix something in Windows as simple as networking, the first thing I do is open up a command terminal and type ipconfig /renew. All the time that Microsoft spent developing the end-user networking GUI was for nothing when dealing with a user like me. But then again, most users that use Windows aren’t like me, and the time Microsoft spent creating the interface was well spent.

The issues come in when a company like Cisco spends hundreds of thousands of dollars writing interfaces for something like the ASAs (which is actually an excellent GUI as far as GUIs go) and most people who deal with ASAs use the command line. I do most of my Cisco work directly on the command line within IOS. On all the *nix machines I administer (which is actually quite a few more than I would like to think about at times), I don’t install any of the GUIs. I do everything via the trusty old command line, and I know a lot of others do the same.

This extends even into the development world. When I write code, I do so using vim on the command line and not an overkill IDE like Eclipse. Even the long-time developers and engineers at my company use the command line when given the opportunity. Now, this isn’t to say that GUIs don’t have their place, since they certainly do make some tasks easier, faster, etc. But the fact remains that companies like Cisco will make these GUIs that cost them hundreds of thousands of dollars to develop/test/deploy/maintain, when the majority of the people that use them usually just want a solid debugging tool where they don’t have to keep clicking over and over (as Tom notes).

Recent Twitter Related Learning

Thursday, April 23rd, 2009

I’ve been using Twitter for a few weeks now and am starting to get used to some of the concepts. I have since also been reminded of a little of the RTFM concept, mixed with taking the experts with a grain of salt.

I started out reading some information given by Brent Ozar on his blog. These are 3 articles that kicked me off in the right direction:

  1. Twitter 101
  2. Top 10 Reasons I’m Not Following You On Twitter
  3. Top 10 Reasons I’m Following You On Twitter

That’s when I was reminded by Ryan Maple that sometimes, depending on the person, those may be the opposite reasons. To be more specific, Ryan won’t follow people who consistently tweet the fact that they have just put up a new blog post (although I am guilty of this every so often).

But I think the biggest element of Twitter that has caught me off guard is the massive amount of information at one’s disposal. For instance, the yellow pages or wefollow.com can provide you with people to follow based on your interests. I have learned so much about things in my field just by following links that people re-tweet.

So if you are a Twitter luddite like I was, you may want to rethink it. I am constantly looking for more ways to make it useful, too. So if you have something, let me know.

Checking For A DoS

Monday, November 17th, 2008

When you work on groups of web servers, especially ones that are highly susceptible to attack, it is a good idea to have a string of commands on hand that will let you check what is going on.

Check for a DDoS:

netstat -n | grep EST | awk '{ print $5 }' | cut -d: -f1 | sort | uniq -c | sort -nr | perl -an -e 'use Socket; ($hostname, @trash) = gethostbyaddr(inet_aton($F[1]), AF_INET); print "$F[0]\t$F[1]\t$hostname\n";'

Using this command will produce a list of hostnames that have a connection to the machine in an ESTABLISHED state. This is handy for creating a firewall rule either on the host (iptables, ipfw) or a little further away from the machine (at the edge router).
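The counting stage of that pipeline can be seen in isolation by feeding it canned netstat-style output (the addresses below are made up for illustration); the Perl hostname lookup is left off here since it needs DNS access:

```shell
# Canned netstat-style lines: field 5 is the foreign address.
printf '%s\n' \
  'tcp 0 0 10.0.0.5:80 192.0.2.1:51000 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 192.0.2.1:51001 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 198.51.100.7:33000 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:80 203.0.113.9:44000 TIME_WAIT' > /tmp/netstat-sample.txt

# Same pipeline as above, minus the hostname lookup: count ESTABLISHED
# connections per remote IP, most connections first.
grep EST /tmp/netstat-sample.txt \
  | awk '{ print $5 }' | cut -d: -f1 \
  | sort | uniq -c | sort -nr
```

The top line of the output is the remote IP with the most open connections, which is usually the first candidate for a firewall rule.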

Check for web attacks:

cat eric.lubow.org-access_log.20081015 | awk '{print $1 }' | sort | uniq -c | sort -nr | head | perl -an -e 'use Socket; ($hostname, @trash) = gethostbyaddr(inet_aton($F[1]), AF_INET); print "$F[0]\t$F[1]\t$hostname\n";'

By using this command, you will get a hostname lookup on each IP, sorted by total hit count descending. As when checking for DDoS attacks, you can use this information to write firewall rules.
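As a quick sanity check, the same counting pipeline can be run against a tiny fabricated access log (the IPs and filename are invented for the example), again leaving off the DNS resolution step:

```shell
# Fabricated access-log lines: field 1 is the client IP.
printf '%s\n' \
  '192.0.2.1 - - [15/Oct/2008:00:00:01 -0400] "GET / HTTP/1.1" 200 512' \
  '192.0.2.1 - - [15/Oct/2008:00:00:02 -0400] "GET /a HTTP/1.1" 200 512' \
  '192.0.2.1 - - [15/Oct/2008:00:00:03 -0400] "GET /b HTTP/1.1" 200 512' \
  '198.51.100.7 - - [15/Oct/2008:00:00:04 -0400] "GET / HTTP/1.1" 200 512' \
  > /tmp/sample-access_log

# Hit count per client IP, descending.
awk '{ print $1 }' /tmp/sample-access_log | sort | uniq -c | sort -nr | head
```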

More web attack checks:

for i in `ls *.20081015 | grep -v error`; do echo "##### $i ######"; tail -n 10000 $i| awk '{print $1};' | sort -n | uniq -c | sort -nr | head -2; done

The difference between this check and the previous one is that this time you may have a lot more logfiles to go through. I am also assuming that they are stored by .. It will print out which file it’s scanning and the top 2 issues from that file.

Referrer Check:

for file in `ls -lrS *access*20080525* | tail -n20`; do echo "==========" $file; gawk --re-interval -F'"' '{ split($4, myrt, "/");  split($0, myct); split(myct[3], myc, " "); if (length(myrt[3])==0) { myrt[3]="none"}; if (myrt[3] ~ /([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}/) { referrers[myrt[3]"/"myc[1]]++; } else { t=split(myrt[3], myrt2, "."); myref="*."myrt2[t-1]"."myrt2[t]; referrers[myref"/"myc[1]]++; } } END { for (referrer in referrers) { print referrers[referrer], referrer } }' $file | grep -v none | sort -n; done

This last check will get the referrer for a page from the logs and count up the number of times that exact referrer drives traffic to your page. Although this may initially appear to be only tangentially useful, if you are getting DDos, it may be hard to track down. Let’s say that you have some static content like a funny image and want to know why everyone is going to that image. Maybe your getting Dugg or ./ and this will help you tell (and find out what your page is so you can Digg yourself if you’re into that sort of thing).