Stop Telling Me To Refactor

December 14th, 2014

I got an email from Daniele earlier this morning about the post I did on how web acceptance tools suck. They were kind enough to share their thoughts on how they felt I was (to use their words) "facing the problem in the wrong way".

They went on to describe how I should just separate the front-end from the back-end to make testing the app as a whole easier. They shared a link to a project where they had done exactly that, so go take a look.

I don't find much wrong with that approach, but it has a hole in it big enough to throw my ego through without scraping the sides. Daniele, this has nothing to do with you and everything to do with a mindset that continually comes up in our industry.

The incredible casualness with which we tell people that the solution to all their problems is to refactor things.

Just stop it. Stop telling me that. Stop telling other people that. It shows you have given no thought to the other person's situation. Do they have the time to refactor? Do they have permission to refactor? Are they even skilled enough to refactor the app in a way that makes it better? These are all legitimate questions. Blindly recommending refactoring isn't the solution.

In my case, for my current work-related activities, there is zero chance we will be doing the kind of refactoring that Daniele is very helpfully recommending. Why? The application in question works and brings in money, so the app in its current form needs to stay up and keep working. That means I need to find the least invasive way possible to test the application's UI.

There are other refactorings I am doing (moving the app from ZF1 to ZF2 is among them), but those take time and have to ensure the existing application doesn't crash and burn due to those changes. I suspect I will end up buying a bunch of overworked QA people in Kiev a nice thank-you present when they finish the extensive testing I need them to do.

Send me your thoughts and ideas on Twitter or to chartjes@grumpy-learning.com

Web Acceptance Tools Suck

October 31st, 2014

Wow. Almost a whole year since I posted something. Time to change that.

It is not a coincidence that I am writing this blog post on Halloween 2014, because I want to talk about something that should make your skin crawl and make you want to turn on all the lights wherever you are reading this.

I am talking, of course, about writing automated web acceptance tests.

At ZendCon 2014 I gave a talk about how to use Behat and Mink to write automated acceptance tests for your websites. You know, the type of thing where you drive a browser using some code and it pretends to click on things...and you lie to yourself about the value all this work is giving you.

I also made the classic mistake of waiting too long to go over my slides for this talk, and realized I was using versions of these tools that were extremely out of date. I then spent six hours updating the code instead of watching my friends give awesome talks about PHP and complementary technologies.

This sucked. Sucked big time.

As far as I can tell, these tools are currently at the point in the PHP community where unit testing tools were 10 years ago, back when I first decided I needed to find ways to never again work 120 hours of overtime in the six weeks leading up to Christmas.

Behat and Mink are both very powerful tools, but their documentation is lacking, and the code samples reveal what I think is a very large disconnect between how the creators of these tools use them and how the rest of us use them. I have no idea if there is even any blame to be handed out, unless you subscribe to the theory that shitty documentation is the fault of the people who created the projects in question.

I am not diminishing the work of those who created these things. They are great programmers, and I am just a grumpy guy who wants things to work a certain way and not act like obstacles. I like my tools to be complementary and easy to bend to my will.

But when I look at how brittle these things are, and how slow they are, it makes me wonder if I wasted my time learning how to use them.

For those who don't understand what I am getting at, or think I am acting like the drama queen they secretly hope I am, consider this -- the best way to identify elements on a web page is to use tools that expect you to fully understand both CSS selectors and XPath.

Folks, this is a shit show.
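To make that concrete, here is roughly the level of selector knowledge these tools assume. This sketch uses nothing but PHP's built-in DOM extension (no Mink, and the page snippet is invented for illustration), but the XPath expression is exactly the kind of thing you end up writing by hand:

```php
<?php
// Locating a button in a page snippet using XPath via PHP's built-in
// DOM extension -- the same flavour of selector that a Mink
// find('xpath', ...) call expects you to produce yourself.
$html = '<form id="login"><button class="btn primary">Sign in</button></form>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$xpath = new DOMXPath($doc);
// Reads as: "a button somewhere under the element with id 'login'
// whose class attribute contains the word 'primary'". Intuitive, right?
$nodes = $xpath->query(
    "//form[@id='login']//button[contains(concat(' ', normalize-space(@class), ' '), ' primary ')]"
);

echo $nodes->item(0)->textContent; // prints "Sign in"
```

The `concat`/`normalize-space` dance exists because XPath has no native "match one class among several" operator -- which is part of why people reach for CSS selectors instead, trading one syntax to learn for another.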

Now, it is entirely possible I am simply Doing It Wrong, and I will be happy to be corrected. Take a look at the code in this repo AND DESPAIR. This code is brittle, has to manually maintain state between test steps, and the tests take forever to run because you have the overhead of starting up a browser and painfully crawling through the DOM to find things.

(By the way, who is the person that decided the DOM was the best way to internally represent elements on a web page? If it was Tim Berners-Lee, well, it strikes me as a decision that is biting us in the ass now but was probably totally logical at the time)

I am hopeful that someone out there is working on a better set of abstractions and tools that will make the kinds of things we ask Behat, Mink, Page Objects, and PhantomJS to do work a lot better.

Send me your thoughts and ideas on Twitter or to chartjes@grumpy-learning.com and I will do a follow-up post.

Test Spies and Mockery

December 27th, 2013

While recording some screencasts I was struggling to figure out how to get PHPUnit's built-in object mocking tools to allow me to create what is known as a "test spy". I talk about them briefly in my PHPUnit Cookbook but I think that what I wanted to do in this instance was beyond what PHPUnit could give me.

I had some code-under-test with a conditional statement inside a foreach() loop (aggravating my desire to practice object calisthenics), and I wanted to make sure that both branches of the conditional got executed.
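The code-under-test looked something along these lines. This is a hypothetical reconstruction -- the class name, method names, and record shape are mine, not the real application's -- but it shows the foreach-with-a-conditional structure where each branch fires a different query:

```php
<?php
// Hypothetical sketch of the shape of the code-under-test: a loop with
// a conditional inside, where each branch issues a different query.
class RecordSync
{
    private $db;

    public function __construct($db)
    {
        $this->db = $db;
    }

    public function process(array $records, $update, $delete)
    {
        foreach ($records as $record) {
            if ($record['active']) {
                // active records get updated...
                $this->db->query($update, ['id' => $record['id']]);
            } else {
                // ...inactive records get deleted
                $this->db->query($delete, ['id' => $record['id']]);
            }
        }
    }
}
```

A test that only checks the update branch would happily pass while the delete branch rots, which is why I wanted an expectation on each distinct query() call.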

I first tried something like this:

    // $db is our mocked database object based off stdClass for testing
    $db->expects($this->once())
        ->method('query')
        ->with($update, ['id' => 1]);
    $db->expects($this->once())
        ->method('query')
        ->with($delete, ['id' => 5]);

I was using Aura.Sql and its Update and Delete objects. I wanted to be sure that I was using both objects.

I also tried using $this->at(0) and $this->at(1), but I got errors ranging from "method query was not mocked" to complaints about expected values not showing up in the expected sequence.

I knew there had to be a better way, but I really wanted just to use PHPUnit's built-in mocking. I couldn't figure it out. So instead I turned to a mocking library that I knew supported test spies: Mockery.

The code reads a lot smoother:

    // m is an alias to \Mockery
    $db = m::mock('stdClass');
    $db->shouldReceive('query')->with($update, ['id' => 1])->once();
    $db->shouldReceive('query')->with($delete, ['id' => 5])->once();
    // \Mockery::close() in tearDown() is what verifies these expectations

The first thing that jumps out at me is that the Mockery version looks cleaner. Well, really, it's only one less chained call. But looks do count for something.

More importantly, my test worked the first time, with no weird error messages about unexpected behaviour.

So the next time you are writing a unit test and need to create spies on methods of a mocked object, I cannot recommend enough that you take a look at Mockery.
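If you are curious what a spy is doing under the hood, the pattern is simple enough to hand-roll in plain PHP. This is a toy sketch to show the idea -- record every call, assert afterward -- not a substitute for Mockery:

```php
<?php
// A toy test spy: records every call to query() so the test can make
// assertions about the calls afterward, instead of declaring
// expectations up front the way a mock does.
class SpyDb
{
    public $calls = [];

    public function query($statement, array $params)
    {
        $this->calls[] = [$statement, $params];
    }
}

// Exercise the spy the way the code-under-test would...
$db = new SpyDb();
$db->query('UPDATE', ['id' => 1]);
$db->query('DELETE', ['id' => 5]);

// ...then verify that both branches of the conditional were hit.
assert(count($db->calls) === 2);
assert($db->calls[0] === ['UPDATE', ['id' => 1]]);
assert($db->calls[1] === ['DELETE', ['id' => 5]]);
```

That record-then-assert ordering is the whole difference between a spy and a mock, and it is why spies produce friendlier failures: you inspect what actually happened rather than tripping expectation machinery mid-call.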