Testing Laminas API end points

December 11th, 2020

As I write this, I have been working for Unnamed Financial Services Company for 3 months. It has been a good exercise in figuring out how to do a slow-but-steady migration of a CodeIgniter 3.x application to a macro-service where we are moving some things to an API and will move the other functionality (which is mostly used by the customer service folks at UFSC) into a different application.

For the API side of things I decided we should go with Laminas API Tools, due to its tooling and the flexibility we get from writing our own glue code to solve particular problems. As much as I am an old veteran of full-stack PHP frameworks, our architectural plans leave me worried that I would end up fighting too much with the conventions of one of those types of frameworks.

So, having used the admin UI to create an API end point, and organizing our code that interacts with the database into some abstractions that I think make sense and provide us with some much-needed structure, I have a single-action controller that handles the API call:


namespace Lead\V1\Rpc\FindById;

use Application\Repository\DoctrineLeadRepository;
use Doctrine\ORM\EntityManagerInterface;
use Laminas\Mvc\Controller\AbstractActionController;
use Laminas\ServiceManager\ServiceLocatorInterface;

final class FindByIdController extends AbstractActionController
{
    private EntityManagerInterface $entityManager;

    public function __construct(ServiceLocatorInterface $serviceLocator)
    {
        $this->entityManager = $serviceLocator->get('doctrine.entitymanager.orm_default');
    }

    public function findByIdAction(): array
    {
        $leadId = (int) $this->getRequest()->getQuery('id');
        $lead = (new DoctrineLeadRepository($this->entityManager))->findById($leadId);

        if ($lead !== null) {
            return $lead->toArray();
        }

        return [];
    }
}

Now the discussion worth having: how do we test this thing?

For pragmatic reasons I decided that I would not go the path of creating test doubles for everything and then using the Laminas service manager to replace the existing dependencies with doubles. I'll just use the real database that the DoctrineLeadRepository will be talking to and get on with a test.

So, we have two scenarios that we need to test:

  • Does it behave correctly if it cannot find a lead in the database
  • Does it behave correctly if it finds a lead and returns some response

Okay, so I get started with my test. The first step is creating a skeleton that reads in the Laminas-specific configuration options for the main Application module and then the module that contains our API controller action.


namespace ApplicationTest\Lead\V1\Rpc\FindById;

use Laminas\Stdlib\ArrayUtils;
use Laminas\Test\PHPUnit\Controller\AbstractHttpControllerTestCase;

final class FindByIdControllerTest extends AbstractHttpControllerTestCase
{
    private \Laminas\ServiceManager\ServiceManager $serviceLocator;

    protected function setUp(): void
    {
        $this->setApplicationConfig(ArrayUtils::merge(
            include __DIR__ . '/../../../../../../../config/application.config.php',
            include __DIR__ . '/../../../../../../../module/Lead/config/module.config.php'
        ));

        parent::setUp();

        $this->serviceLocator = $this->getApplicationServiceLocator();
    }
}

Okay, now a test for the first scenario:


    // Additional dependency...
    use Lead\V1\Rpc\FindById\FindByIdController;

    /** @test */
    public function it_handles_missing_lead_correctly(): void
    {
        $response = (new FindByIdController($this->serviceLocator))->findByIdAction();

        self::assertEquals([], $response);
    }

This one passes since the code in the controller action behaves as follows:

  • there is no value in the query for 'id'
  • so when it tries to retrieve a Lead it will get back null
  • so it returns an empty array

The next scenario was trickier. It was not obvious to me how to inject a query parameter into the request. I was used to other frameworks where I could add a parameter to the controller action and the framework would automagically inject the corresponding HTTP query parameter.

Using an old tactic, I started digging around in the tests for the Laminas MVC package to see how they were testing things. It took a while and some trial and error, but I did figure it out.

    // New dependencies added
    use Laminas\Http\Request;
    use Laminas\Mvc\MvcEvent;
    use Laminas\Router\RouteMatch;
    use Laminas\Stdlib\Parameters;

    /** @test */
    public function it_finds_something_that_looks_like_a_lead(): void
    {
        $controller = new FindByIdController($this->serviceLocator);

        // Create what route we want to execute
        $routeMatchParams = [
            'controller' => 'Lead\\V1\\Rpc\\FindById\\Controller',
            'action' => 'findById',
        ];
        $routeMatch = new RouteMatch($routeMatchParams);

        // Build up the request that contains our lead ID
        $request = new Request();
        $request->setQuery(new Parameters(['id' => 2]));
        $request->setMethod('GET');
        $request->setUri('/lead/find');

        // Create an event that the app is listening for
        // and tell the controller to use it
        $event = new MvcEvent();
        $event->setRouteMatch($routeMatch);
        $event->setRequest($request);
        $controller->setEvent($event);

        // Get our response
        $result = $controller->dispatch($request);

        // Make sure that it is actually a lead
        self::assertEquals(2, $result['id']);
    }

While the test passes, in the next version I want to create a Lead as part of the test, store it in the database, and then make sure I retrieve the one I expect. Hard-coding is okay for the first pass but should not be in the final version.

Hopefully this blog post helps you solve your own Laminas-related problems faster than I did. ;)

Testing Legacy Apps - Episode 1

June 19th, 2020

(If you like the work I have done with testing PHP code and OpenCFP, please consider sponsoring me at https://github.com/sponsors/chartjes/)

I have an application that has been in use by the people who participate in my longest-running hobby for more than a decade. It has no tests. I have no excuses other than laziness. Time to change that.

In this continuing series of blog posts I am going to show you how to start with no tests and end up with a test suite that covers the behaviour of your application that matters the most. Along the way I am going to teach you what I feel is a repeatable framework for approaching testing of existing code.

I also want to emphasize that my approach doesn't depend on the framework you are using. Most PHP web application frameworks include helpers to make testing easier, and this is a good thing! But there is plenty of PHP code out there that was built without a web application framework, or with one that lacks testing helpers, so we need to learn some techniques for writing tests without that support.

For this whole series there are some constraints we are going to be dealing with:

  • I'll be using PHPUnit to write the tests
  • I'll be refactoring code to make it easier to test
  • I won't be adding any new features

With those three conditions in place, let's pick something and get started.

What Is The Application's Domain?

The application we are writing tests for handles transactions and manages rosters for a tabletop baseball simulation league. The most popular game of this genre is Strat-O-Matic Baseball, and the league I am in uses a game we created ourselves.

I'm sorry if some of the terms I end up using are ones you don't understand; some domain knowledge is required to figure out the tests.

I created a web application to manage all the roster stuff more than a decade ago, and slapped it together quickly because we needed something. Over the years it's been tweaked but I have been lazy and justified not having any tests for it because "I understand the domain well enough to manually test it". Shockingly, I still manage to break things.

As with any long journey, it begins with a step. I'm going to pick something that I constantly break and wrap some tests around it.

What is our first testing scenario?

As part of this whole series, I also want to emphasize that I will be focussing not on testing the code but on coming up with tests for how the application is supposed to behave. To figure this out, I need to identify what parts of the application need to work all the time.

  • making trades should not break
  • signing free agents should not break
  • entries in the transaction log should always be correct

Those are the high level tests that need to be written. There are also some tests at a lower level that need to happen to satisfy the conditions above. I keep finding ways to break some functionality that deals with indicating whether or not a player had a "card" in the game (basically, did we print a card for that player for a specific year). Why does it keep breaking? Because I did not create one centralized location that is the source of truth for a player.

I've written tests for this functionality before as examples for my books and presentations but I feel like it is time to take a different approach and build something easier to work with.

When I am creating testing scenarios I like to use the language that people who practice Behavior-Driven Development tend to use. So here is a great testing scenario to start with:

Given I am a Player
When I have no card for the current season
Then indicate my uncarded status
And indicate the season I was uncarded for
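That scenario could translate into a microtest along these lines. This is only a sketch; the `Player` class and its method names are hypothetical stand-ins I made up for illustration, not code from my application, and plain `assert()` calls stand in for PHPUnit assertions to keep it self-contained:

```php
// Hypothetical Player object capturing the "uncarded" behaviour
// from the scenario above. Names here are invented for illustration.
final class Player
{
    private string $name;
    private ?int $uncardedSeason;

    public function __construct(string $name, ?int $uncardedSeason = null)
    {
        $this->name = $name;
        $this->uncardedSeason = $uncardedSeason;
    }

    public function isUncardedFor(int $season): bool
    {
        return $this->uncardedSeason === $season;
    }

    public function uncardedSeason(): ?int
    {
        return $this->uncardedSeason;
    }
}

// Given I am a Player with no card for the current (2020) season...
$player = new Player('Oscar Gamble', 2020);

// Then indicate my uncarded status
assert($player->isUncardedFor(2020) === true);

// And indicate the season I was uncarded for
assert($player->uncardedSeason() === 2020);
```

The point of a shape like this is that the scenario maps one-to-one onto assertions, with no database involved.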

Now, when I look at the code I already have, I am doing this...but I don't like it. I have created the idea of a Roster, created a 'model' for it, and just deleted some tests I had for it. Instead, I think I want to drill a little deeper into this problem and come up with a better fix.

A Roster is a collection of one or more objects -- "batters", "pitchers", and "draft picks". Right now I am not making that distinction. Here's the code I wrote that looks at a player's "uncarded status" and figures out what additional information needs to be displayed next to the player's name:

    public function getBattersByIblTeam($ibl_team): array
    {
        $sql = "SELECT * FROM rosters r WHERE r.ibl_team = ? AND r.item_type = 2 ORDER BY r.tig_name";
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute([$ibl_team]);
        $results = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (!$results) {
            return [];
        }

        $roster = [];

        foreach ($results as $row) {
            $displayName = trim($row['tig_name']);

            if ($row['uncarded'] === $this->previous_season || $row['uncarded'] === $this->current_season) {
                $displayName .= ' [UC' . $row['uncarded'] . ']';
            }

            $row['display_name'] = $displayName;
            $roster[] = $row;
        }

        return $roster;
    }

This is not bad code by any means -- it works. From a testing perspective, though, there are all sorts of problems with it due to the way it mixes database access and display logic in a single method.
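One way to start chipping away at that is to extract the display-name rule into a pure function that can be microtested without PDO. A sketch only -- the name `formatDisplayName` is my own invention, not code from the app:

```php
// Hypothetical refactoring: the display-name rule pulled out of the
// database-touching method so it can be tested in isolation.
function formatDisplayName(array $row, int $previousSeason, int $currentSeason): string
{
    $displayName = trim($row['tig_name']);

    // Note: with PDO the 'uncarded' column may come back as a string,
    // so a real version would need to normalize types before comparing.
    if ($row['uncarded'] === $previousSeason || $row['uncarded'] === $currentSeason) {
        $displayName .= ' [UC' . $row['uncarded'] . ']';
    }

    return $displayName;
}

// A microtest for this needs no database at all:
echo formatDisplayName(['tig_name' => ' Babe Ruth ', 'uncarded' => 2020], 2019, 2020), PHP_EOL;
// Babe Ruth [UC2020]
```

The query method then becomes a thin wrapper that just fetches rows and calls the function, which is a much smaller thing to test.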

Some Testing Theory

May 14th, 2020

TL;DR - Focus on testing behaviours instead of testing code

(If you like the work I have done with testing PHP code and OpenCFP, please consider sponsoring me at https://github.com/sponsors/chartjes/)

While doomscrolling on Twitter, I saw internet-friend Snipe ask the following question:

When writing unit tests, do you typically write a test to check if the model is saved (i.e. create it via factory, check it passes built-in validation)? Feels a bit too much like testing the framework (or the factory) to me

To which I responded

Personally I am writing unit tests to verify behaviour, and if it requires making sure the model was saved then I will check

Snipe followed up with:

In acceptance or functional tests, sure - just seems weird to have them in unit tests.

Her response got me to thinking about some testing theory ideas that have changed how I approach the tests I write and how I categorize a particular test. As always, there are multiple approaches to solving these problems -- "trust, but verify" is a good practice.

Test Behaviours, Not Code

No matter what type of test I am writing (more on that later) I always ask myself "what test will prove this code is behaving as expected?". This is a different approach from "what parts of the code am I going to test". My experience has been that when you focus on testing behaviours, you end up writing fewer tests but with the same level of coverage of the code under test.

Focussing on behaviour also means you do not have to make context switches when you start thinking about tests of different types.

Test Types

Commonly these have been referred to as unit vs integration vs acceptance. Labels can change over time and right now I have settled on three types of tests:

Microtests

Microtests are tests that are verifying the behaviour of a single object in isolation. Calling them unit tests works here too. The additional pressure being applied here is whether or not you will use real versions of the dependencies the code you are testing requires or if you will create test doubles.

Both approaches have benefits and drawbacks. Tests with doubles tend to run quickly, but they have the maintenance overhead of needing to be updated when the behaviour of a double no longer matches the real dependency. Tests that use real dependencies are slower and can have the maintenance overhead of needing databases or services to be made available and kept up to date.

These are typically written with a testing framework and can be automated via CLI tools.
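To make the doubles trade-off concrete, here is a minimal hand-rolled double (all the names in this sketch are made up for illustration): the real implementation would consult the system clock, while the double returns a fixed, predictable value so the test never flakes.

```php
// Hypothetical interface a production class would depend on.
interface ClockInterface
{
    public function now(): DateTimeImmutable;
}

// A hand-rolled test double: always returns the same instant,
// so any code depending on "now" becomes deterministic in a microtest.
final class FixedClock implements ClockInterface
{
    private DateTimeImmutable $fixed;

    public function __construct(DateTimeImmutable $fixed)
    {
        $this->fixed = $fixed;
    }

    public function now(): DateTimeImmutable
    {
        return $this->fixed;
    }
}

$clock = new FixedClock(new DateTimeImmutable('2020-05-14'));
echo $clock->now()->format('Y-m-d'), PHP_EOL; // 2020-05-14
```

The maintenance cost shows up when `ClockInterface` changes: every double has to be updated to match, which is exactly the overhead described above.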

Integration Tests

Integration tests verify that the behaviour of two objects interacting with each other is what we expect. These tests should almost always use real dependencies unless there is a really good reason not to -- say, using an API's sandbox environment because the production API is rate limited or charges per use. These situations should be exceptions rather than common practice.

These are also typically written with a testing framework and can be automated via CLI tools. The goal of this level of tests is to act as a filter that catches any bugs that your first layer of microtests missed.

Acceptance Tests

Acceptance tests are tests that verify that the behaviour of the application is correct, meaning that multiple objects will be interacting with each other using real dependencies. These sort of tests are usually conducted manually or built using some kind of automation framework that can drive a client application (usually a web browser).

Just like the integration tests, this layer should be catching any bugs your microtests and integration tests didn't find. Tests are usually written by humans, so there are some scenarios and edge cases that were not considered when the tests were written. All you can do is write code as defensively as possible, carefully consider your testing scenarios, and hope that nothing goes horribly wrong in production.

Back to Snipe's Question

So the original question is "should my unit tests be checking that data is saved?". The answer, in my mind, is that if the behaviour you are testing requires you to verify that data that was just created is saved and contains data you expect, then you will need to use real models with a real database connection.
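As a plain-PHP illustration of that answer (everything here is invented for the example, and an in-memory SQLite database stands in for a framework's model layer), verifying the saved data is simply part of verifying the behaviour:

```php
// Illustration only: in-memory SQLite stands in for the real database
// a framework model would talk to.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)');

// The behaviour under test: registering a user stores their record.
function registerUser(PDO $pdo, string $email): void
{
    $stmt = $pdo->prepare('INSERT INTO users (email) VALUES (?)');
    $stmt->execute([$email]);
}

registerUser($pdo, 'snipe@example.com');

// Checking the row exists is not "testing the framework" -- it is
// verifying the observable outcome of the behaviour we care about.
$stmt = $pdo->prepare('SELECT email FROM users WHERE email = ?');
$stmt->execute(['snipe@example.com']);
echo $stmt->fetchColumn(), PHP_EOL; // snipe@example.com
```

If registration later stopped writing the row, this test fails, regardless of which framework or factory produced the data.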

I emphasize there is no wrong answer to Snipe's question! It is a matter of deciding on an approach and dealing with the associated technical debt.