Mongo migration

For the past few months I’ve been at a terrific job, doing devops at a small SaaS company. Real quick, SaaS means “Software as a Service” & refers to companies with a webapp that they sell access to, set up a version of for their customers, or both. There are a lot of challenges in doing devops for a company like this: trying to find the balance between heavyweight solutions and the latest and greatest, to land on what’s right for us, all the while (personally speaking) doing a LOT of learning on the topic. That’s not to say that heavyweight and latest & greatest are opposed; there are a few more weights on that spinning disk, not the least of which is “what we were doing before was …”.

So what I’ve been working on for the last few weeks, somewhere between the old solution & the new hotness, has been a Mongo problem. We deal in data that must be scrubbed before we analyze it. The way that works is: each host captures data, scrubs ALL of it right there, and sends it on to our long-term storage database; all local data on that host is removed after a couple of days. What we’ll do with all of this in five, ten years will hopefully be the subject of another post, but for now we are only dealing with about 30GB of data in the long-term storage DB, collected over the last couple of years. Let’s call that database “Storeo,” and the hosts the data comes from “partner databases,” which is true enough.

We’ve developed a couple of schemas for Storeo, and we only upgrade our partners from one to the next with code releases, so we have a couple of old versions of Storeo kicking around. The next piece of this story is that we have an analytics dashboard set up for each partner, which pulls from Storeo based on a domain field in the data we get from that partner. There’s one dashboard for each version of Storeo, which means they (and we) have to check multiple dashboards just to get all the info! So that’s foolish, yeah? As a result, a previous engineer wrote a Mongo migration script to migrate all data from version 1 to 2, and another from version 2 to 3, the current version. So there are two steps to this – first, migrate all the legacy data up to the current version so everything can be analyzed in the same way, and second, do this regularly, so that even if partners are using older versions, we roll that data up and there is ONE source of truth for all their data.

As happens occasionally, no one can quite remember how I got this project, but it’s been a ride. Mostly good, occasionally “how the hell does Mongo even work?”. Some of the problems I’ve gone through have been of a Mongo nature, some of a sysadmin nature, some just basic DBA. Many of these steps might make you scream, but I’m cataloguing them because I want to get down what all I’ve done and learned. When you are self-taught, your education comes in fits and starts, in no particular (and sometimes infuriating) order. So I’m going to do my best to show you all the things I did wrong, too.

Problem 1 – Where to Test

I wanted to test the migration locally, not on the production Storeo server, which continues to receive data from all our partner databases. First, I fired up the mongodump docs and tried that. Well, I nearly immediately ran out of room, and deleted that dump/ directory and its contents. When I looked around with df -h /, a command which shows you disk usage on the root filesystem, human-readable, the output showed only a couple of gigs left. I knew then that dumping a 15GB database wasn’t going to work locally. So I investigated a lot of other options, like sending the mongodump to another server (technically possible), or SSHing into the server but sending all the dumped data to my local machine, which has plenty of space on it. This probably took a couple of days of investigation between other tasks.
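For the curious, the streaming version of this is the approach I still think should have worked. A minimal sketch, with hypothetical hostnames and paths – mongodump can write to stdout when you pass --archive with no filename (mongodump 3.2+), so the dump never has to touch the cramped local disk:

# on the Storeo server: stream the dump over ssh instead of writing it locally
# (the hostname and paths here are made up)
mongodump -d storeo1 --archive | ssh me@my-workstation 'cat > /big-disk/storeo1.archive'

# then, on the machine with room to spare: restore straight from the archive
mongorestore --archive=/big-disk/storeo1.archive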

None of this really panned out (but I still think it should have), and my boss let me know that there’s a 300GB volume attached to Storeo, and I said, wait, but I didn’t see that, I looked for something like that – and they gently let me know that if you don’t give df any arguments, it shows all the disks mounted on a server. With that, a df -h showed me the 300GB volume, mounted on /var/lib! Excellent. On a practical note, it’s extremely sensible to have all the data for your application stored on a volume rather than on some enormously provisioned server. When you use AWS, one volume is much the same as the next, so putting databases on their own volumes is pretty sensible. Keep your basic server’s disk very bare-bones, and put the more complex stuff on modular disks that you can move around if you need to.
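In case you, like me, had only ever pointed df at one path, the whole difference is the argument:

df -h /   # only the root filesystem – all I had been checking, and it was nearly full
df -h     # every mounted filesystem, which is how the 300GB volume on /var/lib shows up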

So with that!! I made a directory for myself there, to keep separate from the production stuff, confirmed that mongodump/mongorestore do NOT interrupt read/write operations, and made mongodumps of versions 1, 2, and 3. This took… maybe an hour. Then, because they were still quite large (Mongo is very jealous of disk space), I tarballed & gzipped them down to half a gig or so. We use magic-wormhole all the time at work (available with a quick pip install magic-wormhole [assuming you have Python and pip installed {but it doesn’t have to be just a Python thing, just like I use ag and that’s a super Perl-y tool}]), so I sent these tarballs to my local machine, untarred/ungzipped them, and mongorestored them into the local copies of Storeo versions 1, 2, & 3 that I use to run our app on my own machine. This probably, with carefulness and lots of reading, took another couple of hours. At this point we’re probably a week in.
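Roughly, that shuffle looked like the following – the directory names are made up, but the commands are the real tools (tar, magic-wormhole’s wormhole, mongorestore):

# on the server, in my scratch directory on the big volume (names are hypothetical)
tar czf storeo1-dump.tar.gz storeo1-dump/
wormhole send storeo1-dump.tar.gz   # prints a code phrase to hand to the receiver

# on my local machine
wormhole receive                    # paste the code phrase when prompted
tar xzf storeo1-dump.tar.gz
mongorestore -d storeo1 storeo1-dump/storeo1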

Problem 2 – How to Test

At this point, I finally started testing the migration itself, since everything was a safe copy and totally destructible. I also retained the tarballs in case I ended up wanting to drop a database or fiddle with it in some unrecoverable way. I took a count of the documents being migrated, and of the space taken up by each DB (which was different than on prod – I thought until this week that those sizes should stay constant through prod to mongodump to tarball to mongorestore, but that’s not true – apparently most databases are wiggly with their sizing). The migration script is a javascript script (how do you even say that) that you feed into mongo like so: mongo migration1-to-2.js. Within it you define dbSource and dbTarget; the source, in this case, is version 1 of Storeo, the target is version 2, and each is a distinct database managed by Mongo. With great trepidation, I did iiiit. Ok, I’ve left a piece out. I, um, didn’t know how to run JS. Googling said “oh just give the path to the browser!” so I did and, uh – that didn’t work. You may be saying “Duh.” Look, I’ve never done any front-end at all, and have never touched javascript outside that Codecademy series I did on here a couple years back. With my tail between my legs I asked my boss again, & was told about the above: just mongo filename.js.
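Taking those counts is quick in the mongo shell – something like this, using the collection names that show up in the migration script below (the db.stats() numbers are the “wiggly” ones):

use storeo1
db.collection_1.count()   // compare these before & after the migration
db.collection_2.count()
db.collection_3.count()
db.stats().dataSize       // size on disk – don't expect this to match prod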

The script took three hours!! Gah! So I ran the next one, which took SEVEN (since it contained everything from the first one, too), plus regular attention to the ssh session so I didn’t lose the process (don’t worry, linux-loving friends, I’ll get there, just keep reading). These two migrations took two business days. At this point, we started talking to the team who manages the data analysis dashboards for our partners about some of the complexities. Because a) this isn’t a tool from Mongo, so there are no public docs on it, and b) you can only test Storeo’s behavior after the data has been scrubbed and sent, even locally, we decided to set up a few demo servers pointed at test versions of the database.

Remember the volume attached to Storeo on production? Whoo! I logged onto Storeo, learned a ton more about mongodump & mongorestore, and made teststoreo1, teststoreo2, and teststoreo3 – exact mongodump/mongorestore copies of versions 1, 2 & 3 of Storeo. Their sizes, again, were different, but we’ve learned that that’s ok! Mongo makes a lot of guarantees; space management isn’t one of them, so pack extra disk and we’ll be fine. This took a lot of googling and careful testing, because the last thing I wanted to do was mongorestore back into the place I’d mongodumped from – at the time I wasn’t sure whether mongorestore overwrites the target entirely, and I wanted to be cautious about potential data loss. So: make the directory, mongodump into it while specifying the database, then restore into a new database (with the same name as the directory you’ve just made – this isn’t mandatory, but it made things easier to trace) while feeding it the path where the mongodump lives.

mkdir teststoreo1 # make the directory
mongodump -d storeo1 -o teststoreo1/ # dump the database named storeo1 into the dir we just made
... # this takes some time, depending of course on the size
mongorestore -d teststoreo1 teststoreo1/storeo1 # restore that dump into a new DB named teststoreo1 (there could be a dump/ in front of this end path)

So after doing this for the other two Storeo databases as well, a show dbs command in the Mongo shell listed all three production Storeos, as well as all three test Storeos. This meant we were in a good place to do some final testing. There were a few more meetings assessing risk and the complexity of all the pieces of our infrastructure that touch Storeo, how you do. Because the function of Storeo is to continually take in scrubbed data, I had to ensure that we weren’t going to lose information sent during the migration. Because it’s not an officially supported tool but something we wrote in-house – and I hadn’t been able to find an existing tool that moves data from one Mongo DB to another – it’s hard to know what will and won’t impact production. So I set up one of our demo servers to send its scrubbed data to teststoreo1, and then kicked off the migration from teststoreo1 to teststoreo2 to make sure there was no data loss. On that demo server, while the migration was migratin’, I made a bunch of new dummy data that I’d be able to trace back to that demo server. A few hours later, when the 1-to-2 migration was complete, sure enough there were a handful of documents in teststoreo1 that were new – they’d been held back & NOT lost! With this, I was very happy with the migration script.
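The check itself was just a query in the mongo shell against teststoreo1. The field names are real – they appear in the script below, and the domain field is the one the dashboards key on – but the values here are made up:

// run against teststoreo1; dummy data from the demo server, created during
// the migration, should still be sitting here – not migrated, and not lost
var migrationStart = ISODate("2017-01-15T09:00:00Z");   // when I kicked it off (made-up time)
db.collection_2.find({
    domain: "demo-partner.example.com",                 // my traceable dummy data
    createTime: {$gte: migrationStart}
}).count()                                              // should be > 0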

So I kicked off the following script with mongo migrate1-2.js, suspended the process with ctrl-z, and put it in the background (after identifying it as job 1) with bg %1, so it wouldn’t be interrupted by my leaving the session (see?)..

'use strict';

var dbSource = connect("localhost/storeo1");
var dbTarget = connect("localhost/storeo2");

// The migration process could take so long that new documents may be created
// while the script is still running. We will move only the ones created
// before the start of the process
var now = new ISODate();

dbSource.collection_1.find().forEach(function(elem){
    elem.schemaVersion = 2; // this means each element is given the NEW schema version
    dbTarget.collection_1.insert(elem);
});

dbSource.collection_2.find({createTime: {$lt: now}}).forEach(function(elem){
    elem.schemaVersion = 2;
    dbTarget.collection_2.insert(elem);
});

dbSource.collection_3.find({timestamp: {$lt: now}}).forEach(function(elem){
    elem.schemaVersion = 2;
    dbTarget.collection_3.insert(elem);
});


dbSource.collection_1.remove({}); // this collection did not have a timestamp
dbSource.collection_2.remove({createTime: {$lt: now}});
dbSource.collection_3.remove({timestamp: {$lt: now}});

The second script was the same, but with dbSource and dbTarget defined as storeo2 and storeo3, respectively. As with the testing, the first one took about three hours; the second, seven. With each one, I kicked it off, put it in the background, then checked on it… later. Because it’d been backgrounded (that’s a verb, sure), it wasn’t quiiiiite possible to tell when it was done. That could be fixed with some kind of output at the end of the script, but that’s not how I did it!
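For the record, a sketch of the tidier version I didn’t do: put print("migration finished at " + new Date()); as the last line of the script (print() is built into the mongo shell), then kick the whole thing off under nohup with a log file instead of babysitting ctrl-z and bg – filenames here are made up:

nohup mongo migrate1-2.js > migrate1-2.log 2>&1 &   # keeps running after the ssh session ends
tail -f migrate1-2.log                              # watch for the "migration finished" line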

Then I set up a lil cron job there at the end to regularly move data from 1 to 2, and once that had run for the first time, I set up the second cron job to move it from 2 to 3.
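The crontab entries look something like this – the schedule and paths are hypothetical, but the shape is right, with the 1-to-2 job deliberately scheduled to finish before the 2-to-3 job starts, so the data rolls all the way up:

# m h dom mon dow   command
0 1 * * *   /usr/bin/mongo /opt/storeo/migrate1-2.js >> /var/log/storeo-migrate.log 2>&1
0 5 * * *   /usr/bin/mongo /opt/storeo/migrate2-3.js >> /var/log/storeo-migrate.log 2>&1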

Who wants to talk about Mongo????????


Objects in Javascript

In Javascript, there are many ways, it seems, to create, instantiate, and edit objects. As part of my ongoing (noble) quest to make these curious structures clear, I want to create a resource for myself (and friends!) on how to manipulate objects in Javascript. Then I want to go back and compare how Python handles objects & what you can and can’t do with either language. MASTERY!

Ways to Make Objects

1

var rachelsObjectLiteral = {
    objectColor: "red",
    objectSize: "friggin' huge",
    rachelsMethod: function (parameterToCall) {
        console.log("this is a 'hidden' function or method of the object " +
            "rachelsObjectLiteral, calling " + parameterToCall + ". the way to " +
            "call this Method is only via the object name, so " +
            "rachelsObjectLiteral.rachelsMethod(argumentToCall)");
    }
};

// this one has to live OUTSIDE the literal (a var declaration isn't valid
// inside one), and you can call it without referencing rachelsObjectLiteral.
// sounds like python's global.
var rachelsOtherMethodMadePlain = function () {
    console.log("and this one you can just call outside the object " +
        "without needing to reference rachelsObjectLiteral.");
};

rachelsObjectLiteral["objectHappiness"] = "really happy";

So as the object name announces, this is the Object Literal way of creating an object. All of these attributes can be accessed via rachelsObjectLiteral.attribute; for example, rachelsObjectLiteral.objectColor refers to “red.” And a method is a function inside of an object – it’s ‘hidden’ in the sense that you can’t call rachelsMethod without the object name. Also note that in an object literal construction, it’s COMMAS that go after each attribute (but not the final one), not semicolons.
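To make the ‘hidden’ part concrete, here’s what happens with each kind of call (same objects as above):

rachelsObjectLiteral.rachelsMethod("a string");   // works: called via the object name
rachelsOtherMethodMadePlain();                    // works: it's a standalone function
rachelsMethod("a string");                        // ReferenceError – it only exists as a property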

And also note that I can add an attribute super easily here, e.g. objectHappiness.

2

function objectConstructor (objectAttribute1, objectAttribute2) {
    this.objectAttribute1 = objectAttribute1;  // as someone helped me see, this is comparable
    this.objectAttribute2 = objectAttribute2;  // to python's self.attributeName = attributeName
    this.objectMethod = function(parameter) {
        return parameter;
    };
}

var constructee = new objectConstructor(27, "floppy");

I really like this method of object creation. Name the superstructure, give it the attributes its constructed objects will eventually have, and then make a var as a new instance of that superstructure. It’s just so easy to construct objects once you’ve created the structure!
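Using the constructed object, plus a second one to show how cheap they are to stamp out:

constructee.objectAttribute1;        // 27
constructee.objectAttribute2;        // "floppy"
constructee.objectMethod("hello");   // returns "hello"

var anotherOne = new objectConstructor(3, "rigid");   // a whole new object, same shape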

Next we’ve got what seems to be a sort of pared-down version of the second method: an object which at first has no attributes, and whose attributes are assigned with keys, one by one. This second constructor-style reminds me a lot of Pythonic dicts:

3

var newThing = new Object();
newThing["attribute1"] = "happy go shiny";
newThing["attribute2"] = "here's the second one!";

A really small version of method three is var objectName = {};, and attributes can be assigned in the same way, or with dot notation, à la objectName.attribute3 = "hooray!";.

Finally, we’ve got object prototypes. Once an object’s been created, if you want to add more attributes you can do as we did in example 1, AND you can use prototype like so!

function Task (taskName) {
    this.taskName = taskName;
}

var coding = new Task("writing Python");

Task.prototype.checkOver = function() {
    return "you did it right, right?";
};

coding.checkOver();

var cleaning = new Task("tidying");

cleaning.checkOver();
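One note on why prototype is more than a convenience: the method isn’t copied into each object, it’s shared – and it’s available even to objects constructed before the method existed (like coding above):

coding.checkOver === cleaning.checkOver;   // true – both point at the same function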

Prototyping is probably the most challenging piece of this language for me, and along with that, it’s likely one of the most powerful. Ok! I will be coming back to this post probably constantly; hopefully you find it useful too!

Javascript cribsheetery

So to declare either variables OR functions (and, I assume, later, objects? unless it turns out they’re the same – they might be the same), you begin with var variableName =. If it’s a simple variable, it behaves predictably, e.g. var variableName = (5 * 2). If it’s a function, this entire lovely curly-brace structure needs to be set up:

var functionName = function (parameter) {
    console.log(parameter * 2);
};

and then to call:

functionName(5)

where 5 is the argument passed in for the parameter of the function functionName.

Ok. Rock and roll. Movin’.

Javascript first thoughts, mackatoots, & tutorialino

Hello again! I have been working and working and working, lately, and I have a few things up my sleeve!

First, my company, The Open Bastion, is working on tutorials in Python. Get in touch if you’d like to create with us! Note: paying gig 🙂

Next, though I know some Python, it seems that I need to keep going, so I am learning Javascript now. Exciting! Something altogether new! So far, it seems like a friendly-friendly C analogue, based on the 50-odd pages of a C++ book I read back in the day, and it makes web site and app creation a BREEZE. I know complications will come – I feel like I hear more complaints about Javascript than any other language right now – but as with all new languages, I’m really enjoying learning the syntax. I’m doing the Codecademy interactive tutorial and it’s adorable and fast fast fast, so I feel like I am getting a lot done and learning learning! Note to self (and to youuuu!) to research later – why are semicolons sometimes required at the ends of lines & sometimes not? I wonder – is it more than just good style? With C, semicolons are quite explicitly required, right?

Next, I’m working on a Mac now. If you know me, you’ll know that this is, like, a huge deal for me. It’s just a work machine, AND it’s four years old, buuuuut it’s great for coding even if it’s comically bad at other things, e.g. file management (honestly, does anybody EFFECTIVELY use Finder? my theory is that Apple wanted to ensure that their file management system was different than Windows’ [whose Explorer {directories, not internet} is superior to all others] and now they’re too bashful to back out???). The touch pad, though, is a serious delight to use, the best I have ever operated, and the keys are lovely to type on. It’s strange having no optical drive (it’s an Air) and the minimal number of USB ports can be frustrating, but of course, a $1300 machine is a $1300 machine, and a professional versus hobbyist (read: my perfectly delightful $600 2.5-year-old windows/ubuntu machine) computer is going to be a vastly different user experience. I hate the fanboyism around Macintoshes so, so much, though, that I probably will still never buy one on my own, but it seems you need to be comfortable with them to work in tech, at least in Portland. So here I am. A bit blech, a bit “oh this works really well.”

Lastly, I’m volunteering with a group called Chick Tech right now, which assigns industry mentors to young women. It’s a very cool organization, and I can already see myself staying involved with them for a long time. I worry about this approach to solving one of tech’s many diversity problems, though – the problem, at this point, is not interesting young women in STEM, but keeping grown, adult women in tech once we are here. And here we are, and I/we will demand equality. Jessica McKellar and Selena Deckelmann, a couple of fabulous tech luminaries whom I follow, both refer to the “leaky pipeline”* – we can get them in, but we can’t keep them there. I think the following tweet from Jenny Thurman sums up the issue quite well.

So I will continue to think about this while I have a blast with my mentee, a motivated young lady whose trajectory I am excited to follow and encourage.

* the earliest reference I have found to this particular use of this term is from a paper from 1996, found here.