
Shipping faster with more confidence

Technical · Ash Allen · Sep 8, 2025

At Fathom Analytics, we've been working hard on improving our development workflow to help us ship faster and with more confidence. Over the past few months, we've spent time identifying areas to improve, trialling different tools and processes, and iterating on them until we found a setup that works for us.

In this article, I want to share the bottlenecks we identified, the changes we made, and the results we've seen so far.

Identifying places to improve

Before we started rolling out any changes to the codebase, we had to identify any weak spots in our development workflow. Were there any bottlenecks that were slowing us down? Any processes that were causing friction? Were there any manual processes which we could automate to save time and reduce the chance of human error? Or, anything we could do to increase our confidence when making changes to the codebase?

We identified a few key areas which we wanted to improve:

  • Test coverage - Our test coverage was around 60%. Although this is still a decent level of coverage, it could be better.
  • No CI/CD pipeline - We had a test suite that we could run locally. But there was no continuous integration/continuous deployment (CI/CD) pipeline in place to run the tests automatically on every pull request (or push to the main branch). This meant that before deploying, one of us would need to run the tests locally.
  • No static analysis - We didn't have any static analysis tools in place to catch possible bugs that our tests might not cover.
  • Manual deployment process - Deploying the code to production was a manual process.

The changes we made

After identifying these key areas, we slowly started making changes to the codebase to address them. Let's delve into each of these changes in more detail.

Increasing the test coverage and test quality

If you've not heard of "test coverage" before, it's a measure of how much of your application's code is executed ("covered") when running your tests.

For instance, let's take this simple greet function, which returns a greeting based on the name provided:

function greet(string $name): string {
    if ($name === 'Fathom') {
        return 'Hello, Fathom!';
    }

    return 'Hello, stranger!';
}

greet('Fathom'); // "Hello, Fathom!"
greet('Luna'); // "Hello, stranger!"

Then let's say we have a single test that runs this code:

public function test_fathom_is_greeted_correctly()
{
    $this->assertSame(
        expected: 'Hello, Fathom!',
        actual: greet('Fathom')
    );
}

In this case, we have tested the lines in the greet function that are run when the name is "Fathom", which means these lines are "covered". But we haven't tested the lines which handle any other name. So those lines are considered "uncovered", which means we have no indication of whether those lines work as expected or not.
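To cover that other branch too, we could add a second test for a non-Fathom name, something like this:

public function test_other_names_get_the_fallback_greeting()
{
    $this->assertSame(
        expected: 'Hello, stranger!',
        actual: greet('Luna')
    );
}

With both tests in place, every line of the greet function would be executed, giving us 100% coverage of it.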

A general rule of thumb is that the higher the test coverage, the better, as it's more likely to highlight bugs. But this isn't always the case, and it's not just about the numbers. It's about the quality of the tests. So you can't just chase a certain coverage percentage and call it a day. For example, our test could look something like this:

public function test_fathom_is_greeted_correctly()
{
    greet('Fathom');

    $this->assertTrue(true);
}

This test would technically give us the same test coverage as the previous test. But it doesn't actually assert that the output of the greet function is correct. So, although it'd increase our test coverage percentage and indicate to us that the function can run without erroring, it doesn't actually give us any confidence that it works as expected.

For this reason, we made sure to audit our existing tests and improve the assertions where necessary. Although this didn't directly increase our test coverage, it did give us more confidence that our test suite was actually testing what we thought it was.

Following this, we focused on writing more tests to cover the previously untested parts of the application. We achieved this by incrementally adding tests over time and making minor changes to the application code as we went.

As a result, we increased our test coverage from around 60% to 88%. By doing this, we now have a ton more confidence when making changes to the codebase. We can use the tests to prove that our code works and that we haven't broken any existing functionality. Of course, things can still break, and bugs can still creep in unnoticed by the tests. But by improving the quality of our test suite, we've reduced the chance of this happening.
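For reference, in a Laravel application you can generate a coverage report locally with something like the command below (this assumes a coverage driver such as Xdebug or PCOV is installed; the exact flags depend on your test runner, and the --min threshold here is just an illustration using our current figure):

php artisan test --coverage --min=88

The --min flag fails the run if coverage drops below the given percentage, which is a handy guardrail once you've reached a level you want to protect.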

Adding static analysis with PHPStan/Larastan

Although our test suite can catch many potential bugs, some may still slip through the cracks. For this reason, we decided to add static analysis to our codebase using PHPStan (with Larastan for Laravel support).

If you're not familiar with static analysis, it's a way of analyzing your code without actually running it. In our case, we can analyze our code by executing a command in our terminal. It can catch potential bugs, code smells, and other issues. Technically, Larastan needs to boot up the Laravel framework, so it's not pure static analysis. But for the purposes of this article, it's pretty close, and we'll just refer to it as static analysis.

PHPStan is a fantastic tool and has stopped me from shipping bugs for years. But if you've ever added it to an existing project (especially one that's at least a few years old), you'll know that it can be a bit daunting. Even running it at the lowest level can throw up hundreds of errors. Thankfully, most of these are usually pretty easy to fix and don't require any application code changes. For example, they might just require adding some docblocks to help PHPStan understand the data types being passed around.
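To give a rough idea of what those fixes look like in practice, here's the sort of docblock involved. The greetAll helper below is made up for illustration (building on the earlier greet example), not code from our app:

/**
 * Without the docblock, PHPStan only knows these values are arrays.
 * The annotations tell it they are lists of strings.
 *
 * @param array<int, string> $names
 * @return array<int, string>
 */
function greetAll(array $names): array
{
    return array_map(fn (string $name): string => greet($name), $names);
}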

So, as you can imagine, when we first added PHPStan to the codebase, a ton of "issues" were highlighted. I use the term "issues" loosely here, and in quotes, because a lot of them weren't actually issues or bugs; they just needed some docblocks and comments added. We started from level 0 (the lowest, least strict level) and slowly solved the issues in between working on new features. Over a week or so, we managed to get it down to 0 errors. Woohoo!

Then we bumped it up to level 1 and repeated the process. Now, at the time of writing, we're at level 2. But depending on when you're reading this, we may have bumped it up again (I'm aiming for at least level 5!).
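If you've not set PHPStan up before, the configuration itself is tiny. A minimal phpstan.neon along these lines is all it takes to get going (an illustrative sketch; the paths and the Larastan include path will depend on your project and on which Larastan version you have installed, as older releases lived under nunomaduro/larastan):

includes:
    - vendor/larastan/larastan/extension.neon

parameters:
    # Start low and raise this over time, as described above.
    level: 2

    paths:
        - app
        - tests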

Adding PHPStan has been a massive win for us. It's caught some minor bugs that our tests didn't, and it also helped to identify some dead code that we were able to remove. Deleting dead code is always a nice win because it makes your codebase smaller and easier to maintain.

It's also been very helpful that we can incrementally add it to the codebase. We didn't have to do it all in one go and aim for the highest level straight away. As a result, we've been able to continue delivering new features, bug fixes, and improvements, while also putting the processes in place to help us ship faster in the future.

Adding GitHub Action workflows

One of the biggest wins for us has been adding GitHub Action workflows to automate our tests, static analysis, and deployment process. I'd even go as far as saying that this has been the single most significant improvement to our development workflow.

As we've mentioned earlier, we spent some time increasing our test coverage, improving the quality of the test suite, and then adding PHPStan. But these were still being used as local developer tools. This meant that during a code review, or before manually triggering a deployment to production, we'd need to pull down the latest code and run the tests and PHPStan locally. If they passed, then we could proceed with the review or deployment. But as you can imagine, this was a bottleneck.

So we automated it by creating some new GitHub Action workflows which would:

  • Run the test suite on every pull request.
  • Run PHPStan on every pull request.
  • Validate our Composer dependencies on every pull request.
  • Run the test suite on every push to the main branch, and then deploy the code if it passes.

Now, whenever we open a pull request, the tests and PHPStan run automatically. Since they run relatively quickly, we can get feedback within minutes. If either of them fails, we can go back and fix the issues before letting someone review the code. This has sped up the review process by preventing the review of broken code that will need fixing anyway.

We also have a workflow which validates our Composer dependencies to ensure we aren't using any abandoned packages or packages with known security vulnerabilities. This workflow runs on every pull request, and if any issues are found, we can address them before merging the code. This prevents us from knowingly shipping code with security vulnerabilities.
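To make this concrete, all of the pull request checks can live in a single workflow file. The sketch below is illustrative rather than our exact configuration (job names, PHP version, and step details will vary):

# .github/workflows/pr-checks.yml (illustrative)
name: PR checks

on:
  pull_request:

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.4'

      # Validate composer.json and check for abandoned/vulnerable packages.
      - run: composer validate --strict
      - run: composer install --prefer-dist --no-interaction
      - run: composer audit

      # Static analysis and the test suite.
      - run: vendor/bin/phpstan analyse
      - run: vendor/bin/phpunit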

Our most useful workflow is the one that runs on every push to the main branch. It does a final run of the tests, and if they pass, then it deploys the code to production. This workflow has been a massive win for us because it removes the manual step of having to run the tests and trigger the deployment. It also acts as a final safety net to catch any issues that might have slipped through the cracks during the pull request reviews. If the tests fail, the deployment is stopped, which prevents us from potentially releasing broken code to production.
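The deployment workflow follows the same shape. The important detail is that the deploy job is gated on the test job with needs, so a red test run never reaches the deploy step. Again, this is a simplified sketch, with the actual deployment command replaced by a placeholder:

# .github/workflows/deploy.yml (illustrative)
name: Deploy

on:
  push:
    branches: [main]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.4'
      - run: composer install --prefer-dist --no-interaction
      - run: vendor/bin/phpunit

  deploy:
    # Only runs if the tests job finished successfully.
    needs: tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: the real step depends on your hosting/deployment tooling.
      - run: ./deploy.sh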

The results

As I mentioned at the beginning of the article, our objectives were to ship features faster, with greater confidence and a lower risk of introducing bugs.

Currently, I don't have any hard metrics to share that prove we've achieved these objectives. But, anecdotally, we've definitely achieved them.

Shipping faster with more confidence

By removing the bottlenecks in our development workflow, we've been able to ship features and bug fixes much faster.

Since implementing these processes and tools, we've been able to respond to user feedback much faster. For example, if a user reports a bug, we can write a test to prove it exists, fix the bug, and then run the tests to prove it was fixed. This whole process can happen within hours (or even minutes sometimes) of the bug being reported.

I've also noticed that we've been able to respond to feature requests much faster, too.

PHP and dependency upgrades are easier

Another great benefit of having these processes in place is that it has made upgrading PHP, Laravel, and other dependencies much easier.

For instance, when we upgraded to PHP 8.4, several deprecations needed to be addressed. Without a good test suite and static analysis in place, we likely wouldn't have spotted these as easily, and they may have caused issues further down the line. But thanks to these tools, we were able to identify them within seconds and fix them quickly.

It has also made the upgrade process for Laravel much easier because we can be sure that our code is compatible with the new version. We can run the tests and static analysis to prove that everything works as expected.

What's next?

Although we've made some significant improvements to our development workflow, there's still more we can do. We're constantly reviewing our workflow and identifying areas for improvement.

Some of the things we'll potentially explore in the future are:

  • Adding browser tests (using something like Pest 4's new browser testing capabilities) to get more confidence that our application works as expected from the user's perspective.
  • Adding mutation testing using Infection to help us improve the quality of our test suite even further.
  • Using Rector to help us automatically refactor and upgrade our codebase.

But, overall, I'm really pleased with the improvements we've made, and I think they've had a positive impact on the way we work. All these improvements are helping us to ship the best analytics product we can!


Ash Allen

Freelance Web Developer
