Posts Tagged ‘database’

What’s So Great About Cassandra’s Composite Columns?

Tuesday, August 7th, 2012

There are a lot of things I really like about Cassandra. But one thing I particularly like when creating a schema is having access to composite columns (read about composite columns and their origins here on Datastax’s blog). Let’s start simply by explaining composite columns, and then we can dive right into why they are so much fun to work with. (more…)

ec2-consistent-snapshot With Mongo

Thursday, April 21st, 2011

I set up MongoDB on my Amazon EC2 instance knowing full well that it would have to be backed up at some point. I also knew that by using XFS, I could take advantage of filesystem freezing in a similar fashion to LVM snapshots. I remembered reading about MySQL backups on XFS being done with ec2-consistent-snapshot. As with any piece of open source software, it just took a little tweaking to make it do what I wanted it to do.
(more…)

New Massachusetts Security Law Passed For Databases

Tuesday, April 27th, 2010

In case you haven’t heard about the new Massachusetts state law regarding consumer or client information in databases, you can read about it here, at Information Week, or just Google for “Massachusetts data security law”. And if you haven’t read about it, then I strongly suggest you do. This is one of those instances where I believe their heart is in the right place, even if the execution/implementation wasn’t perfect.
(more…)

Model Specific Formatted Search Results Using Thinking Sphinx

Monday, February 8th, 2010

Having recently implemented Thinking Sphinx on one of my web sites, I thought it would be cool to be able to search every indexed model. With Thinking Sphinx, it’s easy to have a bunch of different classes returned in the results. The tougher part is displaying them in a way that is organized (although admittedly not very DRY).
(more…)

When To Use MySQL Cursor Classes In Python

Monday, January 18th, 2010

I have been writing a lot of code that interacts with MySQL lately. Sometimes I find it easier to work with the result set in dictionary form, and other times it is easier with an array. But in order to not break all your code, it is necessary to set a default cursor class that keeps your code consistent. More often than not, I find using arrays is easier, since I just want quick access to all the retrieved data. I also end up making my SELECT calls while specifying the columns, and the order of the columns, I want returned.

The reason that using cursor classes is handy is because Python doesn’t come with a mysql_fetch_assoc like PHP or selectrow_hashref like Perl’s DBI interface. Python uses cursor dictionaries to bridge this gap. Ultimately your result is the same. But as with Perl and PHP, defaulting to cursor dictionaries isn’t a good idea for larger datasets because of the extra processing time and memory required to convert the data.
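
As a point of comparison, here is a minimal sketch of the hash-versus-array row access the post refers to, using Perl’s DBI (the connection details, table, and columns are placeholders, not anything from the original post):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=test;host=localhost",
                       "user", "password", { RaiseError => 1 });

my $sth = $dbh->prepare("SELECT id, email FROM users");

# Hash-style access (what Python's cursor dictionaries emulate): keyed by column name.
$sth->execute();
while (my $row = $sth->fetchrow_hashref()) {
    print "$row->{id} $row->{email}\n";
}

# Array-style access: less overhead, but it relies on the column order in the SELECT.
$sth->execute();
while (my @row = $sth->fetchrow_array()) {
    print "$row[0] $row[1]\n";
}

$dbh->disconnect();
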
(more…)

Migrations Without belongs_to Or references

Wednesday, October 14th, 2009

Normally, when you do a database migration in Rails and add ownership from one model to another, you use belongs_to or references:

  create_table :comments do |t|
    t.belongs_to :user
    t.references :post
  end

Interestingly enough, these methods are only available during the initial table creation. If you want to add a reference to a model that is created later, you have to do it the old-fashioned way, by just adding a column:

   add_column :comments, :group_id, :integer

Doing it this way is clean, easy, and definitely in keeping with the KISS principle. But I do find it interesting that you can’t add an association later in the game. Sometimes the Rails way is just to keep it simple and add the column by hand.

Tokyo Tyrant and Tokyo Cabinet

Friday, October 9th, 2009

Tokyo Tyrant and Tokyo Cabinet are the components of a database used by Mixi (basically a Japanese Facebook). For work, I got to play with these tools for some research. Installing everything, along with the Perl APIs, is incredibly easy.

Ultimately I am working on a comparison of Cassandra and Tokyo Cabinet, but I will get to more on Cassandra later.

The tests I am going to be doing are fairly simple. I am going to load a few million rows into a TCT database (a table database in Tokyo Cabinet terms) and then load key/value pairs into the database. The layout, in hash format, is basically going to be as follows:

{
    "user@example.com" => { "sendDates" => ["2009-09-30"] },
    "123456789"        => { "2009-09-30" => "2287" },
}

I ran these tests in two formats: INSERTing the data into a table database, and storing it as serialized data in a hash database. It is necessary to point out that the load on this machine is its normal load, so this cannot be a true benchmark. Since the conditions are not optimal (but really, when are they ever?), take the results with a grain of salt. Also, there is some data munging going on during every iteration to grab the email addresses and other data. All of this is being done through the Perl API and Tokyo Tyrant. The machine this is running on has two dual-core 2.5GHz Intel Xeon processors and 16GB of memory.

For the first round, a few things should be noted:

  • The totals referenced below are counts of email addresses added/modified in the DB
  • I am only using 1 connection to the Tokyo Tyrant DB, and it is currently set up to handle 8 threads
  • I didn’t do any memory adjustment on startup, so the default (which is marginal) is in use
  • I am only using the standard put operations, not putcat, putkeep, or putnr (which I will be using later); a sketch of the basic load loop follows this list

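For reference, the round-one load is essentially the loop below. This is a minimal sketch using the Tokyo Tyrant Perl API, not the actual script: the host, port, sample data, and column names are placeholder assumptions based on the layout shown above.

use strict;
use warnings;
use TokyoTyrant;

# Connect to the Tokyo Tyrant server fronting the table (TCT) database.
my $rdb = TokyoTyrant::RDBTBL->new();
$rdb->open("localhost", 1978)
    or die "open error: " . $rdb->errmsg($rdb->ecode());

# In the real script these rows come from munging the mailing lists.
my %rows = ('user@example.com' => '2009-09-30');

# One standard put per email address; the columns mirror the layout above.
while (my ($email, $date) = each %rows) {
    $rdb->put($email, { "sendDates" => $date })
        or warn "put error: " . $rdb->errmsg($rdb->ecode());
}

$rdb->close();
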
The results of the table database are as follows. It is also worth noting the size of the table is around 410M on disk.

[elubow@db5 db]$ time ./tct_test.pl -b lists/ -D 2009-09-30 -c queue-mail.ini
usa: 99,272
top: 3,661,491
Total: 3,760,763

real    291m53.204s
user    4m53.557s
sys     2m35.604s
[root@db5 tmp]# ls -l
-rw-r--r-- 1 root root 410798800 Oct  6 23:15 mailings.tct

The structure for the hash database (seeing as it’s only key/value) is as follows:

      "user@example.com" => "2009-09-30",
      "123456789" => "2009-09-30|2287",

The results of loading the same data into a hash database are as follows. It is also worth noting the size of the table is around 360M on disk. This is significantly smaller than the 410M of the table database containing the same style data.

[elubow@db5 db]$ time ./tch_test.pl -b lists/ -D 2009-09-30 -c queue-mail.ini
usa: 99,272
top: 3,661,491
Total: 3,760,763

real    345m29.444s
user    2m23.338s
sys     2m15.768s
[root@db5 tmp]# ls -l
-rw-r--r-- 1 root root 359468816 Oct  7 17:50 mailings.tch

For the second round, I loaded a second day’s worth of data into the database. I used the same layouts as above, with the following noteworthy items:

  • I did a get first, prior to the put, to decide whether to use put or putcat (see the sketch after this list)
  • The new data structure is now either “2009-09-30,2009-10-01” or “2009-09-30|1995,2009-10-01|1996”

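The get-before-put logic against the hash database looks roughly like the sketch below. Again, this is an assumed minimal version with a placeholder host, port, and sample key, not the exact script:

use strict;
use warnings;
use TokyoTyrant;

# Hash (TCH) database behind Tokyo Tyrant.
my $rdb = TokyoTyrant::RDB->new();
$rdb->open("localhost", 1978)
    or die "open error: " . $rdb->errmsg($rdb->ecode());

my $email = 'user@example.com';
my $date  = '2009-10-01';

# If the key already exists, append the new date with putcat;
# otherwise create it with a plain put.
if (defined $rdb->get($email)) {
    $rdb->putcat($email, ",$date");   # e.g. "2009-09-30" becomes "2009-09-30,2009-10-01"
} else {
    $rdb->put($email, $date);
}

$rdb->close();
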
Results of the hash database test round 2:

[elubow@db5 db]$ time ./tch_test.pl -b lists/ -D 2009-10-01 -c queue-mail.ini
luxe: 936,911
amex: 599,981
mex: 39,700
Total: 1,576,592

real    177m55.280s
user    1m53.289s
sys     2m8.606s
[elubow@db5 db]$ ls -l
-rw-r--r-- 1 root root 461176784 Oct  7 23:44 mailings.tch

Results of the table database test round 2:

[elubow@db5 db]$ time ./tct_test.pl -b lists/ -D 2009-10-01 -c queue-mail.ini
luxe: 936,911
amex: 599,981
mex: 39,700
Total: 1,576,592

real    412m19.007s
user    4m39.064s
sys     2m22.343s
[elubow@db5 db]$ ls -l
-rw-r--r-- 1 root root 512258816 Oct  8 12:41 mailings.tct

When it comes down to the final implementation, I will likely be parallelizing the put in some form. I would like to think that a database designed for this sort of thing works best in a concurrent environment (especially considering the default startup value is 8 threads).

It is obvious that when it comes to load times, the hash database is much faster. Now it’s time to run some queries and see how this stuff goes down.

So I ran some queries, first against the table database. I grabbed a new list of 3.6 million email addresses, iterated over the list, grabbed the record from the table database for each address, and counted how many dates (via array value counts) were entered for that email address. I ran the script 4 times and the results were as follows. I typically throw out the first run since caching kicks in for the other runs.

Run 1: 10m35.689s
Run 2: 5m41.896s
Run 3: 5m44.505s
Run 4: 5m44.329s

Doing the same thing for the hash database, I got the following result set:

Run 1: 7m54.292s
Run 2: 4m13.467s
Run 3: 3m59.302s
Run 4: 4m13.277s

I think the results speak for themselves. A hash database is obviously faster (which is something most of us assumed from the beginning). The rest of the time comes from programmatic comparisons, like date comparisons on specific slices of the array. Load times can be sped up using concurrency, but given the requirements of the project, the gets have to be done in this sequential fashion.
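
For the hash database, the query loop amounts to the sketch below: get each address, split the stored value, and count the dates. The connection details and the address list are placeholders; the real run iterates over the 3.6 million addresses.

use strict;
use warnings;
use TokyoTyrant;

my $rdb = TokyoTyrant::RDB->new();
$rdb->open("localhost", 1978)
    or die "open error: " . $rdb->errmsg($rdb->ecode());

# Placeholder; the real list is the 3.6 million addresses read from disk.
my @emails = ('user@example.com');

my $total_dates = 0;
foreach my $email (@emails) {
    my $value = $rdb->get($email);
    next unless defined $value;
    # Values look like "2009-09-30,2009-10-01", so counting dates is a split.
    my @dates = split /,/, $value;
    $total_dates += scalar @dates;
}

$rdb->close();
print "Dates seen: $total_dates\n";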

Now it’s on to testing Cassandra in a similar fashion for comparison.