We're choosers, not losers

March 29th, 2016

As I write this I have an iPhone 6S+ that is not working correctly because of decisions made by developers and maybe even some managers. Apps randomly lock up when I click on some links, but not all of them. It's a known issue and one of my friends who works at Apple (Hi Dave) has told me they are working on a fix.

Naturally I used Twitter to rant about it and said "developer laziness and shitty attitudes about testing is why my phone doesn't work correctly now." I stand by this statement 100% because I have seen time and time again what happens when deliberate choices about not having formalized, repeatable testing for your application are made. A might-as-well-be-broken phone is one of those.

As is expected, some people disagree with me. In fact, one person wrote a blog post accusing me of "test shaming" and outlined why they are currently in a position to not write tests.

Zach and I had a long private conversation about his blog post. We disagree on many of the points he made, and I won't share everything we talked about, but I wanted to address some things.

My mother, as some of you know, is a retired high school teacher who taught special education courses. She had a saying, one her students hated to hear, that she would use when people started complaining about their problems: "We're choosers, not losers".

Zach's blog post is full of the choices that he has made and is making, but he tries to frame them not as choices at all but as immutable constraints on his life. They are choices. Nothing more. Sometimes the choices you are presented with suck. But realize they are choices, because that lets you focus on them in a different way.

The choices of his that I totally agree with are the ones where he is putting his family first. I never advocate just stomping out of a job with no backup plan. The plan I always advocate is "stay in your job while you look for something better". Don't mistake Zach as being lazy or some other nonsense. I just disagree with some of his other choices.

Programming is hard. Changing cultures is hard. Not everyone can do it. Hell, I don't do it all the time successfully. But I stay focused and try to make it work.

First, I have a day job where I am required to do what my boss tells me or else there are professional consequences. I don't work in academia, I'm not a technical manager, I am not some kind of testing architect. I work with teams of developers to figure out ways to write tests that give us high confidence that services that can be used by millions of people work correctly. The stakes are high and I like the challenge.

I have worked at exactly one place where I was told that tests were a waste of time. Not long after being told that I put into motion plans to leave. I left, I still have a good career, and that company is gone and dead. If you find yourself continually working for people like that, the problem is how you are deciding to take those jobs.

Zach asked me if I would turn down a job if it was awesome but they didn't do any testing. The answer is 100% yes. Lack of testing is usually the tip of the iceberg of suffering you will be slammed into at a job like that. Organizations that commit to testing tend to have other attributes that are extremely useful when shit goes wrong, which it will.

Testing is not a line item on a time card. It is something that you Just Do as part of programming. You are informally testing things anyway, so why not hold onto those tests and make them more permanent? One of these days I need to talk to a manager who tells their developers that they are not allowed to write tests, if only to write their arguments down so I can refute them as the bullshit they are.
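To make the "you are informally testing anyway" point concrete, here is a minimal sketch of promoting the kind of checks you'd type into a REPL into a permanent test. The slugify helper is hypothetical, invented just for this example:

```python
import unittest


def slugify(title):
    # Hypothetical helper you might poke at in a REPL to see if it works
    return title.strip().lower().replace(' ', '-')


class SlugifyTest(unittest.TestCase):
    # The exact same checks you would type into a REPL, kept around forever
    def test_spaces_become_dashes(self):
        self.assertEqual('hello-world', slugify('Hello World'))

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual('grumpy', slugify('  Grumpy  '))
```

Run it with `python -m unittest` and those one-off REPL experiments become a regression suite for free.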

It is not career suicide to have strong opinions about the positive value of writing automated tests for code and be willing to suffer some short term pain because of it. I made it work because it's what I wanted and I was not afraid of any lasting consequences. Again, there has been no shortage of people who have asked me to come work for them because of these traits. If you care about these things, others will notice. They will ask you to come work for them too.

My commitment to testing since 2006 is not some kind of Pyrrhic crusade. It has not ruined my career -- in fact it has done quite the opposite.

The old argument about how testing doesn't add to the bottom line is one that I have shown to be simply untrue. Bugs that make it into production cost more to fix than bugs found by tests while the developer is working on the code. Failure to understand this is a major error on the part of testing critics everywhere. If you are unable to quantify how much mistakes in production code cost you when compared to what your developers' time costs you, that's on you.
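If you want to start quantifying it, even a back-of-envelope comparison makes the point. Every number below is a made-up placeholder you would replace with your own figures:

```python
# All figures are invented placeholders -- substitute your own numbers
dev_hourly_rate = 75.0

# A bug caught by a test while the developer is still in the code
hours_to_fix_in_dev = 0.5

# The same bug found in production: triage, hotfix, deploy, postmortem
hours_to_fix_in_prod = 8.0
customer_impact_cost = 500.0  # support tickets, refunds, lost goodwill

cost_in_dev = dev_hourly_rate * hours_to_fix_in_dev
cost_in_prod = dev_hourly_rate * hours_to_fix_in_prod + customer_impact_cost

print(cost_in_dev)   # 37.5
print(cost_in_prod)  # 1100.0
```

Even with generous assumptions, the production-side number dwarfs the cost of catching the bug at the keyboard.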

Whenever I switch jobs, I do not do it lightly. I too have a family, and a long stretch of underemployment due to stubbornness would harm my family. But switching jobs frequently does not hurt your career. I've had 12 jobs in 18 years -- do you see me having problems getting new jobs and making more money with each position? This type of argument aimed at me ignores the early part of my career where I busted my ass working for people while building up the skills on my own time so I could do what I wanted. As if my early employers let me spend my work day writing books or preparing conference talks. All done after work.

Finally, when I talk to people about why they should test it is not from a throne of bones in an ivory tower. I have gone in there and worked hard to change cultures to be more open to testing. I have written countless tests and helped developers build their skills. I have worked hard to leave places in a better state than when I got there.

I am not asking for your approval. My message is not condescension but a reminder to take ownership of what you do and your choices. 10 years ago I did not imagine I would be in this position, but I do not regret anything that has happened along the way. There have been some bad choices, but they were choices I made with an understanding of the consequences to me, my family, and my career.

It's true the world isn't always the way we want it. The best way to guarantee it stays that way is to choose to do nothing. We're not losers, we're choosers. I want you to choose a path that leads to success.

Marionette -- First Steps

February 29th, 2016

(I'm not sure I've ever done a post on a Leap Day...)

At Mozilla a lot of folks make use of automation tools in order to write tests. There's even an entire IRC channel devoted to discussions about it. As I get deeper and deeper into my time at Mozilla I now have to think about how to use these tools to accomplish my testing goals. Like my post about using Docker, I wanted to share my first steps in using Marionette, a set of automation tools that focus on driving a browser much in the same way Behat does. It's an essential tool for testing all the various versions of the Firefox browsers that Mozilla releases.

In this case I'm going to be highlighting the use of Marionette Driver. This is a Python library that allows you to control a browser that has support for Marionette built in.

As an aside, I find it very encouraging that the major browser companies are starting to build hooks right in to support tools that use the WebDriver API.

As the link to Marionette-enabled builds states, support for interacting with Marionette is in every recent (as of February 29, 2016) build of Firefox that is available to the public but is not turned on by default. To enable it, you will need to start Firefox from the command line and add a --marionette switch.

My examples were done on Mac OS X El Capitan. Specific steps might be different for your environment. So let me run you through a very quick example of how Marionette does its stuff.

First, I opened another terminal window and started up a copy of Firefox Developer Edition:

/Applications/FirefoxDeveloperEdition.app/Contents/MacOS/firefox --marionette

Once it started, there was a notice that it was ready and listening for connections on port 2828, which is the default. Next I proceeded to use Virtualenv to create a sandboxed environment for my code to run in. Once inside this new virtual environment I installed the Marionette driver using the version of pip that Virtualenv had thoughtfully installed:

pip install marionette_driver
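Earlier, Firefox reported it was ready and listening on port 2828. Before connecting a client it can be handy to confirm something is actually listening on that port. This little helper is my own stdlib-only addition, not part of marionette_driver:

```python
import socket
import time


def wait_for_port(host, port, timeout=10.0):
    # Poll until something is listening on host:port, or give up
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=1.0)
            sock.close()
            return True
        except (socket.error, OSError):
            time.sleep(0.25)
    return False
```

Calling wait_for_port('localhost', 2828) should return True once the Marionette-enabled Firefox has finished starting up, which saves you from connection errors in scripts that launch the browser themselves.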

With the Marionette driver installed, it was time to do a simple test to make sure everything was working. I fired up a Python interpreter (2.7.11) and tried to load a web page the same way the old documentation for the Marionette client showed.

Here's a very simple example of how to use it:

Python 2.7.11 (default, Jan 22 2016, 08:29:18)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import marionette_driver
>>> from marionette_driver import marionette
>>> client = marionette.Marionette(host='localhost', port=2828)
>>> client.start_session()
{u'rotatable': False, u'raisesAccessibilityExceptions': False, u'takesScreenshot': True, u'acceptSslCerts': False, u'appBuildId': u'20160225004014', u'XULappId': u'{ec8030f7-c20a-464f-9b0e-13a3a9e97384}', u'browserVersion': u'46.0a2', u'specificationLevel': u'1', u'platform': u'DARWIN', u'browserName': u'Firefox', u'version': u'46.0a2', u'device': u'desktop', u'proxy': {}, u'platformVersion': u'15.3.0', u'takesElementScreenshot': True, u'platformName': u'Darwin'}
>>> client.execute_script("alert('o hai there!');")
>>> client.navigate("http://www.mozilla.org")
>>> client.get_url()
>>> from marionette_driver import By
>>> first_link = client.find_element(By.TAG_NAME, "a")
>>> first_link.click()

What did I do?

  • loaded the marionette_driver library
  • imported the marionette functionality from that library
  • created a client connected to a browser running on localhost, port 2828
  • started a session
  • caused the browser to execute some arbitrary JavaScript (an alert in this case)
  • navigated to a specific page
  • verified the URL
  • grabbed a helper for identifying elements in a page
  • found the first a tag on the page
  • clicked that link

I am just at the beginning of my work using Marionette (how the heck can I click on things that are part of the browser itself and not on the HTML page?). Hope this little example helps you get started too.

Containers And The Grumpy Tester

February 3rd, 2016

Even with my personal focus on promoting Test-Driven Development I still end up doing a lot of functional testing. For clarification purposes I want to define a functional test as one where I create an automated test that treats the system I'm testing as a black box.

In this case I needed to write some functional tests for Kinto. Kinto is a lightweight storage server that is designed to be distributed-friendly and is being used at Mozilla in production RIGHT NOW to handle the SSL certificate revocation list that Firefox uses.

Being a big fan of automation, I started to brainstorm ideas on what the ideal environment for running these tests would look like. I settled on the following ideas:

  • we should be able to easily start up however many instances of Kinto we need
  • the test script itself needs to know if its own dependencies are all set
  • someone other than me needs to be able to easily use this tool

Docker as a new QA tool

Before we go any further, please go and buy Chris Tankersley's awesome book about Docker. Chris is a friend of mine (and current PHP Magic: The Gathering champion) and his help in going from knowing nothing about Docker to knowing enough to create something useful was invaluable.

For the uninitiated, I will give a gross over-simplification of what Docker is. Docker is a set of tools that allow you to create small containers inside which you can run applications. The theory behind this is that it allows you to run multiple applications on the same server. I'm not sure how it works its magic on Linux-based systems, but on OS X the tools have you using a Vagrant VM to host all these containers for you. I'm not always comfortable with using tools that appear to be magical to me, but I'm okay with what Docker is doing.

In many ways this reminds me of a tool that FreeBSD provided called jails. Back in 2003-2004 the company I was working for gave us development boxes that used jails to simulate what our production environment looked like. I thought it was a very interesting bit of technology that solved a real problem -- how to provide developers with a solid development environment.

Lucky for me, the Kinto project already has a Docker image, so it seemed like a natural thing to use. After some conversations with Mr. Tankersley it appeared what I needed was Docker Compose. Docker Compose lets you start multiple containers at once, create links between them, and do all sorts of other interesting things with them.

Initially I had a grand plan for using Docker Compose. I was going to spin up containers for two different Kinto instances and then two PostgreSQL servers and then they can talk to each other and then it's going to be awesome and I will look like a goddamn genius!

Like so many of my plans, things started off super complicated and then eventually got pared down to what I really needed. I ran into all sorts of problems with my initial scheme because I ended up with what basically amounted to a race condition.

I need to spin up the database containers FIRST and then run some database migrations and OH MY GOD WHY IS THIS ALL SO COMPLICATED.

After banging my head unsuccessfully against this problem, I took a step back and figured out what it was I really needed to create this environment. After calming down and telling imposter syndrome to hit the road, I took a closer look at what the Kinto containers were doing and realized it was fine to use the default of creating a small in-memory database for storing things.

This is a test that is designed to be run in a continuous integration environment so it doesn't really need any permanence. So with that issue out of the way, I tweaked my Docker Compose configuration file until I was happy with it:

  kinto:
    image: kinto/kinto-server
    ports:
      - "8888:8888"

  kinto-read-only:
    image: chartjes/kinto-read-only
    ports:
      - "8889:8889"

I created a custom Docker image for this. I suppose it has a terrible name because it's not really a read-only instance but it's playing the role of the "only read by Firefox" side of the testing environment.

So when you run docker-compose up in the directory with this docker-compose.yml file, it will spin up two containers that are running two different Kinto servers.

Semi-intelligent testing scripts

Next up was to write some tests. Right now we do our tests in Python using the awesome pytest testing tool. I wanted to make sure that the test would gracefully fail if our Docker containers weren't up and running, so I hacked together some code that goes in the setUp method for the test.

def setUp(self):
    # Figure out the IP address where the containers are running
    # (requires `import os` and `import string` at the top of the module)
    f = os.popen('which docker-machine')
    docker_machine = str(f.read()).strip()

    if string.find(docker_machine, "/docker-machine") == -1:
        self.fail("Could not find docker-machine in our path")

    f = os.popen('%s ip default' % docker_machine)
    ip = str(f.read()).strip()

    # Set our master and read-only end points and create our test bucket
    self.master_url = "http://%s:8888/v1/" % ip
    self.read_only_url = "http://%s:8889/v1/" % ip
    self.credentials = ('testuser', 'abc123')
    self.master = Client(server_url=self.master_url, auth=self.credentials)
    self.read_only = Client(server_url=self.read_only_url, auth=self.credentials)
    self.bucket = self.get_timestamp()
    self.bucket = self.get_timestamp()

As with all code examples I put up here, I'm open to feedback and corrections.

Time for that test

The scenario I'm going to share is a very simple one that accurately duplicates a use case: someone alters the collection of data and those changes need to get replicated over to a different server.

The test should make sense because I made sure to add comments:

def test_sync(self):
    # Generate some random records on the read-only end-point
    # (assumption: the initial ten records live on the read-only side,
    # matching the record counts asserted at the end of the test)
    collection = self.get_timestamp()
    self.master.create_collection(collection, bucket=self.bucket)
    self.read_only.create_collection(collection, bucket=self.bucket)
    for x in range(10):
        data = {'internal_id': '%s-%d' % (self.get_timestamp(), x),
                'title': 'record %d' % x}
        self.read_only.create_record(data=data, bucket=self.bucket, collection=collection)

    # Pause and generate some more random records on the master end-point
    time.sleep(1)  # requires `import time`; keeps the timestamps distinct
    for x in range(5):
        data = {'internal_id': '%s-%d' % (self.get_timestamp(), x),
                'title': 'record %d' % x}
        self.master.create_record(data=data, bucket=self.bucket, collection=collection)

    # Get the timestamp of our last record by doing an HTTP query of the
    # read-only collection and grabbing the value from the header
    response = self.read_only.get_records(bucket=self.bucket, collection=collection)
    last_record = response[-1]
    since = last_record['last_modified']

    # Query the master using that value for all the records since that one
    new_records = self.master.get_records(bucket=self.bucket, collection=collection, _since=since)

    # Add those records to our read-only end-point
    for data in new_records:
        new_data = {'internal_id': data['internal_id'], 'title': data['title']}
        self.read_only.create_record(data=new_data, bucket=self.bucket, collection=collection)

    master_records = self.master.get_records(bucket=self.bucket, collection=collection)
    read_only_records = self.read_only.get_records(bucket=self.bucket, collection=collection)

    # We should have 5 records in master and 15 in read-only
    self.assertEquals(5, len(master_records))
    self.assertEquals(15, len(read_only_records))

    # Clean up our collections
    self.master.delete_collection(collection, bucket=self.bucket)
    self.read_only.delete_collection(collection, bucket=self.bucket)

Again, very straightforward. Like I've told people many times -- writing tests is just like writing code, and the test code doesn't need to be fancy. It just needs to accurately execute the test scenario you have in mind.
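The sync logic the test exercises boils down to "give me everything modified after a timestamp". Stripped of Kinto entirely, it is just a filter; the field names below mirror the ones in the test above, but the function and data are my own sketch:

```python
def records_since(records, since):
    # Kinto's _since filter returns records modified after a given timestamp
    return [r for r in records if r['last_modified'] > since]


master = [
    {'internal_id': 'a', 'title': 'one', 'last_modified': 100},
    {'internal_id': 'b', 'title': 'two', 'last_modified': 200},
    {'internal_id': 'c', 'title': 'three', 'last_modified': 300},
]

# The read-only side last saw timestamp 100, so it should pull two records
new_records = records_since(master, 100)
print(len(new_records))  # 2
```

Understanding that the whole sync is "filter by last_modified, copy what comes back" is what let me keep the test itself so plain.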

Always Be Evaluating

As a tester I'm always looking for tools that I think can provide real value and help with testing scenarios. It's still early days with Docker, and it, along with its associated tools, is only getting better. If you've been struggling with a way to build a reasonably-sandboxed environment to run functional tests in, I encourage you to take a look at what I've done here and copy it to your advantage.

The tests I've been working on can be found inside this GitHub repo.