Tracking Test Cases

August 22nd, 2016

After writing unit tests for so long, it's easy to forget that when you start doing QA work you need to record your test cases somewhere others can find them later. With unit tests, the test code is always right there in the test suite! Hopefully it makes sense to others.

Since I'm in a QA role, I'm not touching anyone else's unit tests. I end up using a combination of automation (always in Python, mostly with pytest) and manual testing: things like API contract tests (did they add new fields to their API without telling me?!?) or scripts written by developers to answer questions like "are the cryptographic signatures for this data still correct?"
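
A contract test in this spirit can be sketched with pytest. Everything below is illustrative: the field names are made up, and a real version would pull the payload from an HTTP call to the service rather than a hard-coded dict.

```python
# Hypothetical contract test: flag any fields the API returns that we
# did not agree on. Field names here are invented for illustration.

EXPECTED_FIELDS = {"id", "last_modified", "signature"}

def check_contract(payload):
    """Return the set of fields present that are not part of the contract."""
    return set(payload.keys()) - EXPECTED_FIELDS

def test_no_surprise_fields():
    # In a real test this payload would come from an HTTP call to staging
    payload = {"id": "abc123", "last_modified": 1471882000, "signature": "..."}
    assert check_contract(payload) == set()
```

When a team quietly adds a field, `check_contract` returns a non-empty set and the test fails with the names of the surprises.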

At Mozilla we use Bugzilla to track pretty much everything we do. Most of my testing happens when one of the projects I do QA work for is ready to cut a new release. A "bug" is created, I'm assigned to it as the QA contact, and away we go with the process of getting the code approved by QA for deployment to production.

For example, here's me running some scripts and reporting the results along with a question:

QA feedback

(Turns out I can only run that xml-verifier script in our staging environment)

We use virtualenv a lot to create small sandboxed environments to run our tests in, and the developers on the Kinto project have been very good at creating small tools for me to use to help with testing things.
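
The sandbox workflow looks roughly like this. I'm showing the stdlib venv module as a stand-in for the virtualenv package (the shape of the commands is the same), and the directory and package names are just examples:

```shell
python3 -m venv qa-env            # create a throwaway sandboxed environment
. qa-env/bin/activate             # its python and pip now shadow the system ones
python -m pip --version           # confirms pip is the sandboxed copy
# typical next steps (omitted so the sketch stays self-contained):
#   pip install pytest marionette_driver
#   pytest tests/
```

When the testing round is done, the whole directory can be thrown away without touching the system Python.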

So not only did I have to test that all the signatures remain the same, I also needed to do some manual testing to ensure the admin UI they created works as expected.

We are starting to use TestRail to track our software test cases. One of the goals is to use its API and make calls to it as part of test runs -- we're in the early stages, but that is something I am working on this upcoming week.

Here's an example of me outlining the manual process of adding some fonts to the collection that Fennec uses.

Test cases for Kinto Writer

Later in the same "bug" about deploying Kinto, I added my notes on doing the manual testing for uploading fonts.

Manual testing passed

Python + Pytest + Virtualenv + Bugzilla + TestRail == my tools for testing.

We're choosers, not losers

March 29th, 2016

As I write this I have an iPhone 6S+ that is not working correctly because of decisions made by developers and maybe even some managers. Apps randomly lock up when I click on some links, but not all of them. It's a known issue and one of my friends who works at Apple (Hi Dave) has told me they are working on a fix.

Naturally I used Twitter to rant about it and said "developer laziness and shitty attitudes about testing is why my phone doesn't work correctly now." I stand by this statement 100% because I have seen time and time again what happens when people make deliberate choices not to have formalized, repeatable testing for their applications. A might-as-well-be-broken phone is one of those.

As is expected, some people disagree with me. In fact, one person wrote a blog post accusing me of "test shaming" and outlined why they are currently in a position to not write tests.

Zach and I had a long private conversation about his blog post. We disagree on many of the points he made, and while I won't share everything we talked about, I wanted to address some things.

My mother, as some of you know, is a retired high school special education teacher. She had a saying her students hated to hear, one she would pull out whenever they started complaining about their problems: "We're choosers, not losers".

Zach's blog post is full of the choices that he has made and is making, but he tries to frame them as if they are not choices at all but immutable constraints on his life. They are choices. Nothing more. Sometimes the choices you are presented with suck. But realize they are choices, because that lets you focus on them in a different way.

The choices of his that I totally agree with are the ones where he puts his family first. I never advocate just stomping out of a job with no backup plan. The plan I always advocate is "stay in your job while you look for something better". Don't mistake Zach for being lazy or some other nonsense. I just disagree with some of his other choices.

Programming is hard. Changing cultures is hard. Not everyone can do it. Hell, I don't do it all the time successfully. But I stay focused and try to make it work.

First, I have a day job where I am required to do what my boss tells me or else there are professional consequences. I don't work in academia, I'm not a technical manager, I am not some kind of testing architect. I work with teams of developers to figure out ways to write tests that give us high confidence that services that can be used by millions of people work correctly. The stakes are high and I like the challenge.

I have worked at exactly one place where I was told that tests were a waste of time. Not long after being told that I put into motion plans to leave. I left, I still have a good career, and that company is gone and dead. If you find yourself continually working for people like that, the problem is how you are deciding to take those jobs.

Zach asked me if I would turn down a job if it was awesome but they didn't do any testing. The answer is 100% yes. Lack of testing is usually the tip of the iceberg of suffering you will be slammed into at a job like that. Organizations that commit to testing tend to have other attributes that are extremely useful when shit goes wrong, which it will.

Testing is not a line item on a time card. It is something that you Just Do as part of programming. You are informally testing things anyway, so why not hold onto those tests and make them more permanent? One of these days I need to talk to a manager who tells their developers that they are not allowed to write tests, if only to write their arguments down so I can refute them as the bullshit they are.

It is not career suicide to have strong opinions about the positive value of writing automated tests for code and be willing to suffer some short term pain because of it. I made it work because it's what I wanted and I was not afraid of any lasting consequences. Again, there has been no shortage of people who have asked me to come work for them because of these traits. If you care about these things, others will notice. They will ask you to come work for them too.

My commitment to testing since 2006 is not some kind of Pyrrhic crusade. It has not ruined my career -- in fact it has done quite the opposite.

The old argument that testing doesn't add to the bottom line is one I have shown to be simply untrue. Bugs that make it into production cost more to fix than bugs found by tests while the developer is still working on the code. Failure to understand this is a major error on the part of testing critics everywhere. If you are unable to quantify how much mistakes in production code cost you compared to what your developers' time costs you, that's on you.

Whenever I switch jobs, I do not do it lightly. I too have a family, and a long stretch of underemployment due to stubbornness would harm my family. But switching jobs frequently does not hurt your career. I've had 12 jobs in 18 years -- do you see me having problems getting new jobs and making more money with each position? This type of argument aimed at me ignores the early part of my career, where I busted my ass working for people while building up skills on my own time so I could do what I wanted. As if my early employers let me spend my work day writing books or preparing conference talks. All done after work.

Finally, when I talk to people about why they should test it is not from a throne of bones in an ivory tower. I have gone in there and worked hard to change cultures to be more open to testing. I have written countless tests and helped developers build their skills. I have worked hard to leave places in a better state than when I got there.

I am not asking for your approval. My message is not condescension but a reminder to take ownership of what you do and your choices. 10 years ago I did not imagine I would be in this position, but I do not regret anything that has happened along the way. There have been some bad choices, but they were choices I made with an understanding of the consequences to me, my family, and my career.

It's true the world isn't always the way we want it. The surest way for it to stay that way is to not choose at all. We're not losers, we're choosers. I want you to choose a path that leads to success.

Marionette -- First Steps

February 29th, 2016

(I'm not sure I've ever done a post on a Leap Day...)

At Mozilla a lot of folks make use of automation tools in order to write tests. There's even an entire IRC channel devoted to discussions about it. As I get deeper and deeper into my time at Mozilla I now have to think about how to use these tools to accomplish my testing goals. Like my post about using Docker, I wanted to share my first steps in using Marionette, a set of automation tools that focus on driving a browser much in the same way Behat does. It's an essential tool for testing all the various versions of the Firefox browsers that Mozilla releases.

In this case I'm going to be highlighting the use of Marionette Driver. This is a Python library that allows you to control a browser that has support for Marionette built in.

As an aside, I find it very encouraging that the major browser companies are starting to build hooks right in to support tools that use the WebDriver API.

As the link to Marionette-enabled builds states, support for interacting with Marionette is in every recent (as of February 29, 2016) build of Firefox that is available to the public but is not turned on by default. To enable it, you will need to start Firefox from the command line and add a --marionette switch.

My examples were done on Mac OS X El Capitan. Specific steps might be different for your environment. So let me run you through a very quick example of how Marionette does its stuff.

First, I opened another terminal window and started up a copy of Firefox Developer Edition:

/Applications/ --marionette

Once it started, there was a notice that it was ready and listening for connections on port 2828, which is the default. Next I proceeded to use Virtualenv to create a sandboxed environment for my code to run in. Once inside this new virtual environment I installed the Marionette driver using the version of pip that Virtualenv had thoughtfully installed:

pip install marionette_driver

With the Marionette driver installed, it was time to do a simple test to make sure everything was working. I fired up a Python interpreter (2.7.11) and tried to load a web page the same way the old documentation for the Marionette client showed.

Here's a very simple example of how to use it:

Python 2.7.11 (default, Jan 22 2016, 08:29:18)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import marionette_driver
>>> from marionette_driver import marionette
>>> client = marionette.Marionette(host='localhost', port=2828)
>>> client.start_session()
{u'rotatable': False, u'raisesAccessibilityExceptions': False, u'takesScreenshot': True, u'acceptSslCerts': False, u'appBuildId': u'20160225004014', u'XULappId': u'{ec8030f7-c20a-464f-9b0e-13a3a9e97384}', u'browserVersion': u'46.0a2', u'specificationLevel': u'1', u'platform': u'DARWIN', u'browserName': u'Firefox', u'version': u'46.0a2', u'device': u'desktop', u'proxy': {}, u'platformVersion': u'15.3.0', u'takesElementScreenshot': True, u'platformName': u'Darwin'}
>>> client.execute_script("alert('o hai there!');")
>>> client.navigate("")
>>> client.get_url()
>>> from marionette_driver import By
>>> first_link = client.find_element(By.TAG_NAME, "a")

What did I do?

  • loaded the marionette_driver library
  • from that library, pulled in the marionette functionality I wanted to use
  • created a client connected to a browser running on localhost, port 2828
  • started a session
  • had the browser execute some arbitrary Javascript (an alert in this case)
  • navigated to a specific page
  • verified the URL
  • grabbed a helper for identifying elements in a page
  • found the first a tag on the page
  • clicked that link

I am just at the beginning of my work using Marionette (how the heck can I click on things that are part of the browser itself and not on the HTML page?). Hope this little example helps you get started too.