Testing PHP Network Code

I work on this Open Source webmail client. I don’t think I have ever written about it here before. It’s called Cypht. It connects to services like IMAP, SMTP, and POP3 servers. It uses the PHP function stream_socket_client to create a connection to these services, then it sends commands and reads responses with standard read/write functions like fgets and fwrite.

Recently I decided I hate myself, so I tried to build a way to unit-test this. Turns out it’s possible, and not nearly as hard as I deserve. I did bang my head around the desk area for a few days figuring it out, so not a total loss. Here is how I did it.

Step 1: Abstract low-down-no-good functions

No matter how amazingly awesome your PHP code base is, if your code actually does anything and you want comprehensive unit test coverage, you have no choice but to abstract a few built-in PHP functions that simply don’t play nice (sessions, cookies, header, curl, streams, you get the picture). I use the following pattern for this:

  • Create a class of all static methods that “wrap” the naughty functions
  • Only define that class at run time if it does not already exist
  • Change your code to call the naughty_class::function version
  • Create the same class in your unit test bootstrap with friendly versions of these functions (like doing nothing, or returning true or whatever)
  • Include your unit test version before the run time version when running tests.
  • Realize your wildest dreams of success and good fortune.

An example:

class NaughtyFunctions {
    /**
     * @param string $server host to connect to
     * @param integer $port port to connect to
     * @param integer $errno error number
     * @param string $errstr error string
     * @param integer $timeout connection timeout
     * @param integer $mode connection mode
     * @param object $ctx context
     */
    public static function stream_socket_client($server, $port,
        &$errno, &$errstr, $timeout, $mode, $ctx) {
        return stream_socket_client($server.':'.$port, $errno,
            $errstr, $timeout, $mode, $ctx);
    }
}
Instead of calling stream_socket_client in code, we call NaughtyFunctions::stream_socket_client with the same (similar) arguments. This pattern (or something like it) is required to make this work, so no skipping step 1. It’s also a great way to deal with PHP functions that disagree with PHPUnit, and to fool tests into taking a different code path they would not normally take, by overriding function_exists for example. The runtime class Cypht uses looks essentially like the example above.

Step 2: Build a stream wrapper to fake out your code

In PHP you can fake a “stream”, AKA a file handle or network connection, by creating and registering a “stream wrapper”. For file operations and stateless protocols like HTTP, this is pretty simple – read until the “file” ends. But for persistent network protocols, this takes a bit of cleverness.

You need the ability to read from the stream until you reach “End Of File” (EOF). But then you need to reset the EOF status the next time you issue a command, so you can read from the stream again. There is no way (I know of) to do this from within the stream wrapper prototype, and we don’t want to alter the network code we are testing.

Thus the cleverness. Using the abstract in step 1, we can save a reference to the stream resource, and rewind it every time we send a new command, effectively resetting the EOF. Seems less clever now that I write this, but it was the most difficult part.

Here is an example of both the NaughtyFunctions class and a stream wrapper in action:

/**
 * Generic stream wrapper. This will be extended for protocol
 * specific commands and responses.
 */
class Fake_Server {

    /* position within the response string */
    protected $position;

    /* current response string */
    protected $response = '';

    /* list of commands to responses, varies per protocol */
    public $command_responses = array();

    /* open */
    function stream_open($path, $mode, $options, &$opened) {
        $this->position = 0;
        return true;
    }

    /* read */
    function stream_read($count) {
        $this->position += strlen($this->response);
        return $this->response;
    }

    /* write */
    function stream_write($data) {
        $data = trim($data);

        /* look for and set the correct response */
        if (array_key_exists($data, $this->command_responses)) {
            $this->response = $this->command_responses[$data];
        }
        /* request not found, so set an error value */
        else {
            $this->response = $this->error_resp($data);
        }
        /* CLEVERNESS: here we rewind the stream so we
           can read from it again */
        rewind(NaughtyFunctions::$resource);
        return (strlen($data)+2);
    }

    /* tell */
    function stream_tell() {
        return $this->position;
    }

    /* seek */
    function stream_seek($pos, $whence) {
        $this->position = 0;
        return true;
    }

    /* end of file */
    function stream_eof() {
        return $this->position >= strlen($this->response);
    }

    /* generic error */
    function error_resp($data) {
        return "ERROR\r\n";
    }
}
/**
 * IMAP specific fake server that extends the generic one
 */
class Fake_IMAP_Server extends Fake_Server {

    /* array of commands and their corresponding responses */
    public $command_responses = array(
        /* other commands and responses go here */
    );

    /* IMAP friendly error */
    function error_resp($data) {
        $bits = explode(' ', $data);
        $pre = $bits[0];
        return $pre." BAD Error in IMAP command\r\n";
    }
}

/**
 * Naughty functions wrapper to be used in unit tests. Unlike the
 * run time version, this one returns a "connection" to our fake
 * server.
 */
class NaughtyFunctions {

    /* this will hold a reference to our fake network connection */
    public static $resource = false;

    /* we can toggle this to simulate a bad connection */
    public static $no_stream = false;

    /* fake out stream_socket_client and start the wrapper */
    public static function stream_socket_client($server, $port,
        &$errno, &$errstr, $timeout, $mode, $ctx) {

        /* bad connection */
        if (self::$no_stream) {
            return false;
        }
        /* don't call twice from the same test */
        if (!in_array('foo', stream_get_wrappers(), true)) {
            stream_wrapper_register('foo', 'Fake_IMAP_Server');
        }
        /* open, save a reference to, and return the connection
           to our fake server */
        $res = fopen('foo://', 'w+');
        self::$resource = $res;
        return $res;
    }
}

Step 3: Correlate requests and responses for your protocol

Now all you have to do is map requests to the server with appropriate (or inappropriate) responses to exercise your network code from a unit test. In this case that means adding to the $command_responses array in Fake_IMAP_Server. This is where we cross over from “cool problem solving” to “incredibly tedious unit test production”. Looks like I will be receiving extra punishment after all.
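To make the mapping concrete, here is a stripped-down, self-contained version of the whole idea. The class name, the "fakeimap" wrapper scheme, and the CAPABILITY exchange are illustrative, not Cypht's actual code; for brevity this mini version holds the handle directly instead of stashing it in NaughtyFunctions:

```php
<?php
/* Tiny fake server: maps one IMAP command to a canned response */
class Tiny_Fake_IMAP_Server {
    private $response = '';
    private $position = 0;
    private $command_responses = array(
        'A1 CAPABILITY' => "* CAPABILITY IMAP4rev1\r\nA1 OK done\r\n",
    );
    public function stream_open($path, $mode, $options, &$opened) {
        return true;
    }
    public function stream_write($data) {
        $data = trim($data);
        /* pick the canned response, or an IMAP style error */
        $this->response = array_key_exists($data, $this->command_responses)
            ? $this->command_responses[$data]
            : "A1 BAD Error in IMAP command\r\n";
        return strlen($data) + 2;
    }
    public function stream_read($count) {
        $chunk = substr($this->response, $this->position, $count);
        $this->position += strlen($chunk);
        return $chunk;
    }
    public function stream_eof() {
        return $this->position >= strlen($this->response);
    }
    public function stream_seek($pos, $whence) {
        $this->position = 0; /* reset so the next response can be read */
        return true;
    }
    public function stream_tell() {
        return $this->position;
    }
}

stream_wrapper_register('fakeimap', 'Tiny_Fake_IMAP_Server');
$conn = fopen('fakeimap://', 'w+');
fwrite($conn, "A1 CAPABILITY\r\n");
rewind($conn); /* the step 2 trick: clear any cached EOF before reading */
echo fgets($conn); /* first line of the canned response */
```

A second fwrite/rewind/fgets round trip on the same handle works the same way, which is exactly what persistent-protocol code needs.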

Step 4. See a doctor about your wrist pain from writing all the unit tests

Cypht has about 14,000 lines of code I need to test this way. I’m about 1% through the process. I love that it can be done without standing up an IMAP/POP3/SMTP server, but my fingers hurt just thinking about it.

Continuous testing for Cypht with Travis CI and BrowserStack

I randomly happened upon Travis CI a few weeks ago. Travis is a “continuous integration” platform that can be tied to a Github account. Every time a change is pushed to the Github repository, Travis can run all your unit tests, and it can connect to a Selenium grid provider like BrowserStack or Sauce Labs to run Selenium tests. All 3 (Travis, BrowserStack and Sauce Labs) provide free versions of their services for Open Source projects. “This sounds really cool!” I thought. And it is. But it took a wee bit of work to get it all running. By wee bit I mean a veritable shit-ton. Hopefully this post will save someone out there the hours of Travis config tweaking it took me to get everything ship-shape.

Travis has a lot of good online documentation, definitely a useful resource to get you started. Basically what Travis does is spin up virtual machines that you can control using its setup script. Then it will run whatever commands you want to execute your tests (in my case PHPUnit tests and Selenium tests written in Python). By using its ability to create a “build matrix”, you can generate different server configurations to run your tests on.

As I write this I am running a build with 75 combinations (!). 5 versions of PHP x 3 different databases x 5 different web browsers. This build is not very practical since it will take about 6 hours to complete, but I had to try it once because of course I did. My standard build is 15 different server configurations (PHP versions x database types) with 5 different browsers (3 server-side combinations per browser).

In no particular order here are some tips and tricks for various parts of the configuration that took some time to figure out.

PHP versions
Setting up multiple PHP versions is a snap. You just list the ones you want in your .travis.yml file (the main Travis configuration file), and BOOM – it creates new instances for each version. The only problem I ran into with this is that PHP 7 versions do not have the LDAP extension enabled, while PHP 5 versions do. You can work around this by adding the extensions during your setup process to the travis.ini file that will eventually become the php.ini file.
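In .travis.yml that looks something like the following sketch (the version list and the travis.ini mechanics are illustrative; phpenv config-add is the Travis way to append settings to the active php.ini):

```yaml
language: php
php:
  - 5.4
  - 5.6
  - 7.0
  - 7.1
before_install:
  # PHP 7 images ship without the LDAP extension enabled
  - echo "extension = ldap.so" >> travis.ini
  - phpenv config-add travis.ini
```
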

PHP version setup

Fix for missing LDAP extensions in PHP 7

PHPUnit versions
PHPUnit has 2 major versions that cover PHP 5.4 through 7.1, so you will need to make sure you have the right version installed in each instance. The easiest way to do this is to wget, chmod, and mv the correct phar file based on the PHP version. Travis makes the version available as an environment variable during the setup process, so by looking at that you can grab the correct PHPUnit file.
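A sketch of that selection in the install step (the exact version cut-off and phar URL are assumptions to verify against PHPUnit's release notes; TRAVIS_PHP_VERSION is the environment variable Travis provides):

```shell
# Default the variable so the sketch runs outside Travis too
TRAVIS_PHP_VERSION="${TRAVIS_PHP_VERSION:-7.0}"

# Older PHP needs the PHPUnit 4 phar, newer PHP the PHPUnit 5 one
case "$TRAVIS_PHP_VERSION" in
    5.3*|5.4*|5.5*) PHPUNIT_MAJOR=4 ;;
    *)              PHPUNIT_MAJOR=5 ;;
esac

echo "fetching phpunit-${PHPUNIT_MAJOR}.phar"
# wget -q "https://phar.phpunit.de/phpunit-${PHPUNIT_MAJOR}.phar" -O phpunit
# chmod +x phpunit && sudo mv phpunit /usr/local/bin/phpunit
```
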

Set up different versions of PHPUnit

If you want a PHP enabled web server for your UI tests, and I assume you do since you read this far into the post, you need to install and configure that yourself. I cobbled together a couple of examples from the Travis docs and some various other blog posts to make this work. The example from the Travis docs works fine for PHP 5, however you have to work around an issue with a missing default pool configuration for FPM using PHP 7. I also wanted an IMAP account Cypht could connect to for a more real world experience, so my setup creates a system user to test with, installs Dovecot IMAP, tweaks the configuration slightly, and starts the IMAP service.

Set up Apache with PHP FPM (and the Python Selenium driver)

Default config file for Apache used in the setup

Default FPM pool file used in the setup that fixes PHP 7

Set up a system user with a known password

Set up Dovecot IMAP

Databases
Databases are as easy as PHP versions: just list the ones you want in the main Travis configuration file. However they are not configured in the way applications normally use them. The current database for an instance is in an environment variable, so you can use that to determine which database to bootstrap with whatever tables or users you need. Cypht runs tests across MySQL, PostgreSQL, and SQLite3.
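The shape of that configuration looks roughly like this (the database name and bootstrap commands are illustrative, adapted from the pattern in the Travis docs):

```yaml
env:
  - DB=mysql
  - DB=pgsql
  - DB=sqlite
before_script:
  # use the DB environment variable to bootstrap the right database
  - if [ "$DB" = "mysql" ]; then mysql -e "CREATE DATABASE cypht_test;"; fi
  - if [ "$DB" = "pgsql" ]; then psql -U postgres -c "CREATE DATABASE cypht_test;"; fi
```
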

Database setup (note this is for the massive 75 instance build. You only need one row without the BROWSER part for each database you want to test)

Bootstrap databases for the Cypht unit tests

Selenium tests
To run Selenium tests you need to connect your Travis instance to a Selenium grid, like Sauce Labs or BrowserStack. I prefer BrowserStack, but both are great. The online docs for this are pretty comprehensive, and it took a lot less time than I thought to get this working. Then I tried to use different browsers and ran into a serious problem. Chrome, Firefox, and Safari all worked fine, but Edge and Internet Explorer always failed.

By using the replay feature in BrowserStack, I could see that logins to Cypht failed with those 2 browsers. After much head scratching and keyboard bashing, I realized the issue was that these two browsers will not set cookies when accessing a site with the host name of “localhost”. Thankfully there is a work-around for this. You can force the tests to run locally, but also give them a real host name instead of localhost.

Config entry to use a different host name (the important bits are “hosts” and “forcelocal”)

Limitations for Open Source accounts
Travis will allow Open Source accounts to run 5 parallel instances, however both BrowserStack and Sauce Labs only allow 2 parallel connections to their service. In the Travis dashboard you will want to limit your parallel instances to 2 to match the Selenium provider maximum, otherwise those builds will break.

Return value
After the setup completes, Travis runs your “script command”. The return value of this command or commands will tell Travis if your tests were successful or not. You must be sure to return a non-zero value on failure, otherwise Travis won’t know something went wrong with your tests. You can string commands together with “&&” to build a set of commands to run, which will exit immediately if a non-zero value is returned by any command in the list.
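The exit-status behavior is easy to demonstrate; the function names below just stand in for the real phpunit and Selenium runner commands:

```shell
# stand-ins for the real test commands
run_unit_tests() { return 0; }   # succeeds
run_ui_tests()   { return 1; }   # fails

# && stops at the first failure and propagates its exit code
run_unit_tests && run_ui_tests
echo "build exit code: $?"       # non-zero, so Travis marks the build failed
```
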

Script command Cypht uses

In conclusion, Travis CI rocks for Open Source integration testing and I highly recommend it. Now I have no excuse to not write more tests!

Fun With PHPUnit

What is the first word that pops into your head when you think about unit-testing? I’m guessing “Fun”, amiright? Ok maybe not fun. Unless repeatedly slamming your head against a brick wall is your idea of fun. Trying to build comprehensive tests for existing code is an exercise in patience. Like the Olympics of patience. Sometimes when I come up for air and wipe the blood from my head (and wall), I realize that while I was toiling away in the unit-test dungeon, I stumbled on something useful. Maybe it’s a feature I had not explored before, or a solution to a tricky situation. Maybe it’s something that will help inject a little fun into your PHPUnit experience.

Before I start throwing out random ideas, let me say that I really like PHPUnit: lots of knobs, good documentation, active development. PHP is the Yoga instructor of programming languages, but that flexibility means it’s easy to write poor quality code. When code and unit-tests don’t get along, the right answer is to send the code to bed without dinner, and if it’s really bad, ground it for the rest of the week. We can blame the code all we want, but sometimes there is no choice but to tweak the system to find a way to make it all work.

One of the biggest obstacles in testing is state. Things like global variables, static values or class instances can easily create a situation in which a test passes when run alone, but fails when run as a part of a suite. There are other types of states to consider aside from just the disastrous PHP global namespace. Sessions for example. Or the state of data in a test db or on disk. Tests that fail intermittently are hard to debug, and usually it’s due to an overlooked state issue.

Tests should be as insulated and independent as possible, and PHPUnit has a feature called “@runInSeparateProcess” that forces each test to run in its own PHP process. Using this feature is not as simple as it sounds. If you have a bootstrap file defined in your phpunit.xml file, and it includes any code that defines a constant, you will get an error about redefining said constant in your tests. What gives? I thought each test runs in its own process? It does, but from what I can determine (using the throw-it-against-the-wall method), only the test code itself is run per-process. Assuming that poorly substantiated statement to be true, here is a pattern that does work with process separation.

Process Separation

First the phpunit.xml file, WITHOUT a bootstrap

<phpunit strict="true" colors="true">
    <testsuite name="my_awesome_tests">
        <file>my_awesome_tests.php</file>
    </testsuite>
</phpunit>

Then the my_awesome_tests.php file, with the bootstrap included in the setUp() method and the @runInSeparateProcess and @preserveGlobalState annotations.


class My_Awesome_Tests extends PHPUnit_Framework_TestCase {

    public function setUp() {
        require 'bootstrap.php';
    }

    /**
     * @preserveGlobalState disabled
     * @runInSeparateProcess
     */
    public function test_one() {
    }

    /**
     * @preserveGlobalState disabled
     * @runInSeparateProcess
     */
    public function test_two() {
    }
}

Finally, the bootstrap looks something like this


/* all the things */
error_reporting(E_ALL | E_STRICT);

/* determine current absolute path used for require statements */
define('APP_PATH', dirname(dirname(__FILE__)).'/');

/* get mock objects */
require APP_PATH.'tests/mocks.php';

/* get the code we want to test */
require APP_PATH.'lib/framework.php';

/* get the stubs */
require APP_PATH.'tests/stubs.php';


mocks.php contains stand-in objects used as arguments to methods we want to test. The stubs.php file contains wrappers around abstract classes and traits so we can test them more easily. One advantage of this pattern is it makes it possible to pre-define constants in the setUp() method before the code being tested is loaded, so a test can exercise a code path that triggers on a non-default constant value (Assuming the code being tested checks for an already defined constant). Since mocks are loaded before the code being tested, we can also leverage this to override things unfriendly to testing.
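The constant trick boils down to this (names here are made up for illustration, not from the real suite): a constant pre-defined by setUp() wins over the default the code would otherwise set.

```php
<?php
/* What setUp() does before loading the bootstrap: */
define('APP_DEBUG', true);

/* What the code under test does when it loads (normally in
   framework.php) -- it respects an already defined constant: */
if (!defined('APP_DEBUG')) {
    define('APP_DEBUG', false);
}

echo APP_DEBUG ? 'debug code path' : 'normal code path';
```
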

Overriding Stuff

It’s good practice to limit mocking and overriding to a minimum. The more code that is mocked out, the less actual code that is being tested. There are however some built-in PHP functions that simply don’t play nice, like setcookie or header or session_start or error_log or die – you get the idea. Using the pattern as described above in “Process Separation”, we can easily add some override behavior to deal with these problems (sadly this does require changes to the code being tested).

In our mocks.php file we create a class of all static methods for built-in functions that don’t play well with others.

class BuiltIns {
    public static function php_header($header_str) { return true; }
    public static function php_die($msg) { return true; }
    public static function php_error_log($str) { return true; }
}

In the code to be tested, we setup a mirror image of this class that runs the actual built in functions, but only loads if the class is not yet defined.

if (!class_exists('BuiltIns')) {
    class BuiltIns {
        public static function php_header($header_str) {
            return header($header_str);
        }
        public static function php_die($msg=false) {
            return die($msg);
        }
        public static function php_error_log($str) {
            return error_log($str);
        }
    }
}

Then we replace occurrences of these built-in functions in the code to be tested. So this:

if ($error_str) {
    error_log($error_str);
}

Becomes this:

if ($error_str) {
    BuiltIns::php_error_log($error_str);
}
WOOT! Now we don’t have to worry about an errant error_log spoiling our unit-test party. We can even do something useful in the mocked-out versions: maybe a fake session_start() call can populate $_SESSION, or another constant set in setUp() can toggle success or failure from the mocked-out function. The sky is the limit people!
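A fake session_start() along those lines might look like this (the method name and session contents are invented for illustration):

```php
<?php
/* Test-only stand-in: "starting" a session just seeds $_SESSION,
   so the code under test sees a logged-in user */
class BuiltIns {
    public static function php_session_start() {
        $_SESSION = array('username' => 'testuser', 'logged_in' => true);
        return true;
    }
}

BuiltIns::php_session_start();
echo $_SESSION['username'];
```
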


Coverage Reports
I have only recently started to look at PHPUnit’s coverage options. When I first tried it out, it bailed with a cryptic message and I was sad. Some head scratching and a few apt-get installs later, I was blown away. The HTML coverage report is incredibly useful. By default it sucks up other included files, so if you are dealing with a big code-base it can be handy to limit coverage to just code you are actively testing. I like to define these limits and enable the coverage report in my phpunit.xml file with something like this:

    <filter>
        <whitelist addUncoveredFilesFromWhitelist="false" processUncoveredFilesFromWhitelist="false">
            <directory suffix=".php">../lib</directory>
        </whitelist>
    </filter>
    <logging>
        <log type="coverage-html" target="./coverage.html" charset="UTF-8" highlight="false" lowUpperBound="35" highLowerBound="70"/>
    </logging>

The report is comprehensive. It has summary charts for coverage, complexity, risk analysis, and even line by line detail that makes it brain-dead easy to see what your tests are hitting, and more importantly, what they are missing. Coverage alone does not make a good unit test, but it’s a great tool to help improve your tests.

Extending PHPUnit_Framework_TestCase

This post is really dragging on so I will leave you with one additional trick I came across building unit-tests for a billing system. We wanted a test suite that we could run across a variety of products, but the test code would be nearly identical. Duplicating the tests for each product was a maintenance nightmare. We needed a way to run virtually the same test code across multiple products. Here is what I came up with:

Start by extending the PHPUnit_Framework_TestCase class with an abstract class. This will be where the actual test code lives.


abstract class Base_Tests extends PHPUnit_Framework_TestCase {
    public function test_something() {
        /* your normal test code goes here */
    }
}

Next extend that class for each product you want to test, and define the product as a member variable in the setUp() method so it can be accessed from the test method scope:

class Product_A_Tests extends Base_Tests {
    public function setUp() {
        $this->product = 'product a';
    }
}

PHPUnit will not try to run the tests in the abstract class, but it will find and run the tests in the classes that extend it. For each product you want to test, just extend the base class and define the product details in setUp().

I’m hardly a PHPUnit expert, and there are surely improvements to these ideas or even completely better ways to accomplish the same thing. All of these examples minus the last one were taken from a unit-test suite for some Open Source software I’m working on. You can see the still-in-progress test code at Github, and an example of the coverage report from PHPUnit at jasonmunro.net.