Emerging Tests Pyramid

The external test is green, with a Magic Number in the production code.

describe('Tests', () => {

    it('can be external and focus on feedback', () => {
        expect(new Component().doThat()).to.equal(42);
    });
});

class Component {
    constructor() {
        this.service = new Service();
    }
    doThat() {
        return this.service.doThis();
    }
}

class Service {
    doThis() {
        return 42; // the Magic Number
    }
}

I want the service to fetch the value and remove this Magic Number.

describe('Tests', () => {

    beforeEach(() => {
        // replace fetch with a fresh stub before each test
        fetch = sinon.stub();
    });

    it('can be external and focus on feedback', () => {
        fetch.returns(42);

        expect(new Component().doThat()).to.equal(42);
    });
});

// module-scoped fetch, replaced with a stub by the tests
let fetch;

class Component {
    constructor() {
        this.service = new Service();
    }
    doThat() {
        return this.service.doThis();
    }
}

class Service {
    doThis() {
        return fetch('key'); // a parameter I had not seen coming
    }
}

This fetch takes a parameter that I had not seen coming.

Maybe I should revert and reconsider this move.

I hard-code the parameter with a random value.

My test harness knows nothing about this new Magic Number: I can put any value here and it stays green.
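
If I change the key in Service to any string at all (the one below is just something I typed for illustration), the test above still passes:

class Service {
    doThis() {
        // 'some-random-key' is arbitrary: the stub was set up with
        // fetch.returns(42) and puts no constraint on its argument
        return fetch('some-random-key');
    }
}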

A very uncomfortable position.

What now?

Options:

  1. revert
  2. modify the external test against Component and add a parameter constraint on the stub
  3. add an external test against Component to check the parameter with a mock (sketched just after this list)
  4. add an internal test against Service to check the parameter with a mock
  5. revert … never forget this one so better have it twice
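
Here is what option #3 could look like. This is only a sketch: I assume 'key' really is the expected key, and I use sinon's call verification on the stub to play the mock's role. It is an extra test inside the existing describe block, reusing the fetch stub from beforeEach, and it is still external, going through Component:

    it('fetches the value under the expected key', () => {
        // no return value needed: this test only checks the call
        new Component().doThat();

        // a failure reads something like
        // "expected stub to be called with arguments 'key'"
        sinon.assert.calledWith(fetch, 'key');
    });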

Say I don’t revert. Maybe I trust that I can fix this quickly.

Looking at what happens when the tests fail can help with this decision. The error message for option #2 does not help me much: I have to be smart to understand that the stub returned nothing, which created issues down the road.
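
As a sketch of option #2, still assuming the expected key is 'key', only the stub setup changes in the existing test:

    it('can be external and focus on feedback', () => {
        // option #2: the stub only answers for the expected parameter
        fetch.withArgs('key').returns(42);

        // if Service calls fetch with any other key, the stub returns undefined
        // and the failure reads something like "expected undefined to equal 42",
        // which says nothing about the parameter being wrong
        expect(new Component().doThat()).to.equal(42);
    });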

Keeping an eye on the Tests Pyramid also helps with this decision. Internal tests help pinpoint an issue, while too many external tests might make the feedback loop too slow. The scenario above is a good candidate for option #4.
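
A sketch of option #4, with the same assumptions (the expected key is 'key', sinon's call verification plays the mock): a separate, internal test drives Service directly, so a wrong parameter fails right where the problem is:

describe('Service', () => {

    beforeEach(() => {
        fetch = sinon.stub();
    });

    it('fetches the value under the expected key', () => {
        new Service().doThis();

        // a failure points straight at Service and at the parameter, e.g.
        // "expected stub to be called with arguments 'key'"
        sinon.assert.calledWith(fetch, 'key');
    });
});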

Play with the example above to see the different error messages when the tests fail.
