Laravel tips from Fathom Tips on using Laravel for your SaaS applications at scale. 2023-06-07T10:15:53+00:00 en-us Jack Ellis, Fathom Analytics SingleStore is now faster on Apple Silicon 2023-01-13T00:00:00+00:00 2023-01-13T00:00:00+00:00 Jack Ellis, Fathom Analytics Jan 13, 2023 PT - <p>Apple Silicon processors are incredible; there's no debate there, and various tools have offered fantastic emulation for arm64 and other processor architectures. The problem is that emulation has always been slow in some areas. Phase one of us running SingleStore on my MacBook M1 was made possible by work from <a href="">Charlie Joseph</a>, who managed to get SingleStore running on Docker via emulation. Phase two was made possible by <a href="">Carl Sverre</a> and his team via the <a href="">SingleStoreDB Dev Image</a>. And at this point, things were pretty good, but they still needed improvement, as emulation wasn't as fast as we knew it could be.</p> <p>Fast forward to 12th January 2023, and Docker released beta support for emulation via Rosetta. We knew this was coming, and I'd been optimistic that performance would improve, and it has. But let's stop with the talking and get to the benchmarks. After all, that's why you're here.</p> <h2>Benchmarks</h2> <p>A few notes on the benchmarks you'll be seeing below:</p> <ol> <li>SingleStore uses something called a query plan cache. Long story short, the first time you run a query, there's a slight lag as SingleStore's engine decides on the best way to run it. The benchmarks are run <em>after</em> that initial query plan has been cached because, in production, the initial compilation is irrelevant.</li> <li>The benchmark values are an average of 5 runs.</li> <li>I profiled these queries via SingleStore Studio.</li> <li>I'm running SingleStore 8.0.4.</li> <li>My machine is a MacBook Air M1 2020, so many of you will see WAY better performance than me (e.g.
if you're on a newer MBA or MBP).</li> <li>The benchmarks are done in milliseconds, and I've rounded to 2 decimal places.</li> <li>The benchmarks are not testing for concurrency because this is local development.</li> </ol> <p>Now let's get to testing.</p> <h3>Basic queries</h3> <p>I've performed some basic CRUD queries below and some database creation to see how these behave.</p> <table> <thead> <tr> <th><strong>Query</strong></th> <th><strong>Before Rosetta</strong></th> <th><strong>After enabling Rosetta</strong></th> <th><strong>How much faster is Rosetta?</strong></th> </tr> </thead> <tbody> <tr> <td>CREATE DATABASE test;</td> <td>3,083 ms</td> <td>2,954.75 ms</td> <td>4.16%</td> </tr> <tr> <td>DROP DATABASE test;</td> <td>939 ms</td> <td>740.33 ms</td> <td>21.15%</td> </tr> <tr> <td>CREATE TABLE users (id BIGINT AUTO_INCREMENT PRIMARY KEY, name TEXT, email TEXT);</td> <td>121 ms</td> <td>28.75 ms</td> <td>76.24%</td> </tr> <tr> <td>INSERT INTO users (id, name, email) VALUES (1, 'Jack Ellis', '');</td> <td>5.4 ms</td> <td>2.66 ms</td> <td>50.74%</td> </tr> <tr> <td>SELECT * FROM users WHERE id = 1;</td> <td>4.8 ms</td> <td>2.5 ms</td> <td>47.91%</td> </tr> <tr> <td>UPDATE users SET name = 'Jack Google' WHERE id = 1;</td> <td>6 ms</td> <td>3 ms</td> <td>50%</td> </tr> <tr> <td>DELETE FROM users WHERE id = 1;</td> <td>5.8 ms</td> <td>2.25 ms</td> <td>61.21%</td> </tr> </tbody> </table> <h3>Aggregations</h3> <p>Now we get into my favourite area, aggregations. I've loaded in <strong>3,564,346 pageviews</strong> that belong to (our website). In reality, I wouldn't be using this much data locally unless I was debugging a specific data issue. But I know we'll see the most variance in the benchmarks if I use more data. 
Clearly, we're not going to get the same performance as when SingleStore is running on Intel processors, but let's see how things go.</p> <table> <thead> <tr> <th><strong>Query</strong></th> <th><strong>Before Rosetta</strong></th> <th><strong>After enabling Rosetta</strong></th> <th><strong>How much faster is Rosetta?</strong></th> </tr> </thead> <tbody> <tr> <td>SELECT SUM(pageviews) from pageviews;</td> <td>67.6 ms</td> <td>7.4 ms</td> <td>89%</td> </tr> <tr> <td>SELECT pathname, SUM(pageviews) as x from pageviews GROUP BY pathname ORDER BY x DESC LIMIT 100;</td> <td>125.2 ms</td> <td>16.2 ms</td> <td>87%</td> </tr> <tr> <td>SELECT pathname, SUM(pageviews) as x FROM pageviews WHERE referrer_hostname = '' GROUP BY pathname ORDER BY x DESC LIMIT 100;</td> <td>133.6 ms</td> <td>12 ms</td> <td>91%</td> </tr> <tr> <td>UPDATE pageviews SET utm_campaign = 'boop';</td> <td>22,000 ms</td> <td>3,593 ms</td> <td>83.66%</td> </tr> <tr> <td>DELETE FROM pageviews WHERE referrer_hostname = '';</td> <td>610.33 ms</td> <td>93 ms</td> <td>84.76%</td> </tr> <tr> <td>DELETE FROM pageviews;</td> <td>488.5 ms</td> <td>44 ms</td> <td>90.99%</td> </tr> </tbody> </table> <h2>How do I enable Rosetta on Docker?</h2> <p>Easy peasy, simply:</p> <ol> <li>Update to the latest Docker Desktop</li> <li>Go into Settings -> Features in development</li> <li>Tick “Use Rosetta for x86/amd64 emulation on Apple Silicon”</li> </ol> <h2>Additional notes</h2> <ol> <li>Before enabling Rosetta, the Nodes page in SingleStore Studio didn’t work and always showed 0% for both CPU and memory. With Rosetta, it works perfectly.</li> <li>Performing things such as INSERT INTO pageviews SELECT * FROM pageviews is so much faster when Rosetta is enabled.</li> <li>A member of my SingleStore course, <a href="">Franco Gilio</a>, profiled and saw huge improvements too. His Laravel Nova searches went from 664ms to 166ms. Incredible.</li> </ol> <h2>Not bad</h2> <p>Overall, I’m very impressed by the speed improvements.
And Docker battery jokes aside, my battery life seems to be doing better. These benchmarks were run by me manually, and they’re obviously not relevant for production, but I thought they’d be useful to do. I hope this post was helpful. And if you’re not in our community, make sure you check out <a href="">SingleStore for Laravel</a> to learn how to use the fastest database in the world with Laravel.</p> How to have multiple unique columns in SingleStore 2023-01-04T00:00:00+00:00 2023-01-04T00:00:00+00:00 Jack Ellis, Fathom Analytics Jan 4, 2023 PT - <p>You've used MySQL for many years and have multiple unique keys set up on your tables. Then your data grows, you say goodbye to MySQL, and you move to SingleStore. But hold on a minute; SingleStore only allows you to use one unique key per table. And if you're using an auto-incrementing ID, which most of us are in the Laravel world, your one unique key is gone.</p> <h2>The simple solution</h2> <p>Before I get into this, I will give all credit to <a href="">Carl Sverre</a>. I told him how I was enforcing uniqueness via atomic locks, and he knew of a better way. Let's get to it.</p> <h3>Step 1: Modify your users table</h3> <p>Our users table is the standard create_users_table migration with one modification: we have removed the unique() method from the email column, so the table won't enforce uniqueness on it.</p> <pre><x-torchlight-code language="php" theme="one-dark-pro">
Schema::create('users', function (Blueprint $table) {
    $table->id();
    $table->string('name');
    $table->string('email');
    $table->timestamp('email_verified_at')->nullable();
    $table->string('password');
    $table->rememberToken();
    $table->timestamps();
});
</x-torchlight-code></pre> <h3>Step 2: Create your users_emails table</h3> <p>So this is where the beauty comes in. We can't enforce email uniqueness on the users table, but we can enforce it on another table.
Stay with me, folks.</p> <pre><x-torchlight-code language="php" theme="one-dark-pro">
Schema::create('users_emails', function (Blueprint $table) {
    $table->string('email');
    $table->primary('email');
});
</x-torchlight-code></pre> <h3>Step 3: Creating a user</h3> <p>We now have two tables involved when creating a user. To keep this simple, let's imagine we have a static function called registerUser somewhere in our application. This is what we might do:</p> <pre><x-torchlight-code language="php" theme="one-dark-pro">
function registerUser($email, $password)
{
    try {
        DB::transaction(function () use ($email, $password) {
            // We create our user as usual
            User::forceCreate([
                'email' => $email,
                'password' => Hash::make($password)
            ]);

            // Now, here's the beautiful part.
            // If the email already exists in this table, it will throw
            // an exception, rolling back the transaction (including the
            // user we just created) and enforcing uniqueness
            DB::table('users_emails')
                ->insert(['email' => $email]);
        });
    } catch (\Exception $e) {
        // Handle the exception
    }
}
</x-torchlight-code></pre> <h3>Step 4: Editing a user</h3> <p>How easy was that?
And now, let me show you what we'd do when editing a user.</p> <pre><x-torchlight-code language="php" theme="one-dark-pro">
function changeEmail($userId, $oldEmail, $newEmail)
{
    try {
        DB::transaction(function () use ($userId, $oldEmail, $newEmail) {
            // Modify the email in the users table
            User::where('id', $userId)
                ->update(['email' => $newEmail]);

            // We have to delete the old email because the primary key is immutable
            DB::table('users_emails')
                ->where('email', $oldEmail)
                ->delete();

            // And then we insert the new email, of course
            DB::table('users_emails')
                ->insert(['email' => $newEmail]);
        });
    } catch (\Exception $e) {
        // Handle the exception
    }
}
</x-torchlight-code></pre> <p>How epic is that?</p> <h3>Step 5: Deleting a user</h3> <p>And now deleting a user is pretty similar to editing a user.</p> <pre><x-torchlight-code language="php" theme="one-dark-pro">
function deleteUser(User $user)
{
    try {
        DB::transaction(function () use ($user) {
            // Delete the user's email
            DB::table('users_emails')
                ->where('email', $user->email)
                ->delete();

            // Goodbye my lover, goodbye my friend
            $user->delete();
        });
    } catch (\Exception $e) {
        // Handle the exception. You wouldn't get a "unique" error here, though.
    }
}
</x-torchlight-code></pre> <h2>How cool is that?</h2> <p>My previous recommendation was to use atomic locks to enforce uniqueness, which is acceptable, but this is better. With our transaction approach, there are fewer queries, and things are faster. And uniqueness is enforced by the database.</p> <p>You have duplicate data, but who cares? Databases are about trade-offs. Technology is about trade-offs. Life is about trade-offs. Storing 10,000,000 email addresses twice in exchange for the database enforcing uniqueness? I'll take that any day.</p> Should I use Laravel Vapor?
2020-03-25T00:00:00+00:00 2020-03-25T00:00:00+00:00 Jack Ellis, Fathom Analytics Mar 25, 2020 PT - <p>A lot of people have heard of <a href="">Laravel Vapor</a>, and know that it's for serverless Laravel deployment, but they aren't too sure what it does.</p> <p>Laravel Vapor was built by <a href="">Taylor Otwell</a> and the Laravel team and is a serverless deployment platform for Laravel applications. It effectively marries our code to the underlying AWS infrastructure with very little effort from us, which is great. A lot of people have avoided AWS because there’s too much to learn, but we now have an easy route in.</p> <p>The beauty of Vapor is that we can control AWS resources through its beautiful user interface. This means we always stay at a high level, and only occasionally do we need to dive into AWS a little to play around with some configuration.</p> <p>Vapor, to me, is a step up from Heroku. For those of you who haven’t used Heroku, it’s a beautiful GUI / platform that lets you provision dynos (servers) and attach add-ons without doing any configuration yourself. But it’s expensive, and they limit you to rigid tiers for things such as Redis / Postgres / MySQL, meaning it costs more when you need a small upgrade. Instead of allowing you to upgrade to the next AWS tier, they limit the tiers to force you into paying more. Comparatively, with Vapor, you get access to every single sizing option that AWS offers.</p> <p>Laravel Vapor is the future of Laravel applications. With Vapor, they charge a fixed price of $39 / month and you get AWS prices, with zero markup, for everything it provisions. No markup, how wonderful. It ends up being MUCH cheaper for a lot of use cases.</p> <p>But that’s the technical side of things. What is Vapor actually for?
Why would I use Vapor instead of DigitalOcean?</p> <h2>Why should I use Vapor?</h2> <h3>You don’t want to spend your time managing servers</h3> <p>You don’t want the updates, CPU / RAM monitoring, server reboots, corrupted hardware, etc. You value your health and enjoy your sleep. Being on-call 24/7 is stressful and exhausting, between 2 AM downtime notifications and whatever else comes with managing servers. Sure, it doesn’t happen all the time, but you (or a member of your team) always need to be available to put out fires. Using Vapor means you can utilize a series of AWS services that are all fully managed for you, meaning the AWS team gets the 2 AM calls. Your time is better spent building your application.</p> <h3>You don’t want to hire outside help to manage your servers</h3> <p>When we used a multi-VPS set-up for <a href="">Fathom Analytics</a>, we were quoted $1,000 / month for DevOps support: someone who would get notified if our servers had issues and respond as soon as possible. That is expensive; we only had around 3-4 servers.</p> <h3>You have unknown or varying server load</h3> <p>Vapor is perfect for unknown or varying workloads, since you could process 400 million requests at 6 random times throughout the month, and then 1 million every other day, and you wouldn’t have to over-provision. You only pay for the requests you use.</p> <h3>You run a critical service and want as little downtime as possible</h3> <p>Because Vapor uses AWS Lambda, thousands of different servers are used to process your web requests, which means you won’t run into single-server downtime issues. The same applies to the queue service (SQS) and the database. Sure, the database service can have downtime, but you can mitigate that very easily.
We’ll be covering that in the course.</p> <h3>You like to tweak settings &amp; experiment</h3> <p>In Vapor, scaling your cache or database is as simple as clicking Scale and then entering your desired size. This means you can tweak to your heart’s content and monitor the outcome.</p> <h3>You want a platform that was made specifically to host Laravel applications</h3> <p>Laravel Vapor was built by Taylor Otwell and the Laravel team. Why is this important? Because they are Laravel experts, and the platform was built specifically for the framework your application uses.</p> <h2>When you shouldn’t use Vapor</h2> <p>Now before we close up, I want to balance out this argument and keep it objective. If a hosting bill over $20 / month concerns you, or the $39 / month price tag for Vapor seems expensive, I’m going to go ahead and say Vapor is probably not for you.</p> <p>Vapor is for people who hear "$39 for zero server maintenance" and reply immediately with "Sign me up now". When I first pitched Vapor to a client of mine and explained that you can set up applications on-demand, have zero server maintenance and high availability, they wanted to move right away. Businesses understand the human cost of server maintenance, and it’s thousands of dollars. If they could pay such a small amount to solve this problem, they would fly someone to Taylor Otwell’s house and hand him $4,680 for 10 years of Vapor, and it would still be cheaper than paying a developer for a week of server set-up and then the endless server maintenance that comes with it.</p> <p>And that's my summary on Vapor and whether you should use it. If you're curious, I hear that someone has a course on how to use Laravel Vapor.</p> What is Serverless? 2020-03-22T00:00:00+00:00 2020-03-22T00:00:00+00:00 Jack Ellis, Fathom Analytics Mar 22, 2020 PT - <p>Before we get ourselves too confused with definitions, let us address the elephant in the room.
Yes, serverless infrastructure uses servers.</p> <p>Serverless just means that you don’t have to worry about software updates, security patches, monitoring, provisioning capacity, load balancing, uptime, and all other maintenance.</p> <p>So how does serverless differ from what we’ve been doing for the majority of our careers? The easiest way to explain things is for me to compare a “traditional” set-up against a serverless set-up for a basic web application.</p> <p>Let’s imagine we’ve got a basic Laravel application, and let’s imagine that it’s a chatroom. The features we have are register, login, logout, post message and perhaps some email notifications when you are tagged.</p> <h2>Traditional</h2> <p>With your typical, old-school web application, you’d set up LAMP or LEMP on a server. That server may be a dedicated server or a VPS. If you’re using shared hosting, this will all be set up for you. But that’s the minority of cases when it comes to Laravel applications.</p> <p>For this example, let’s imagine you have some sort of VPS from DigitalOcean or a dedicated server.</p> <p>So as I said, the traditional set-up would be having it all on one server.</p> <ul> <li>Linux</li> <li>Apache</li> <li>MySQL</li> <li>PHP</li> </ul> <p>So that’s how the traditional set-up looks. It’s the set-up that a lot of us came up using, and may still use.</p> <p>This isn’t good, though, because we have all of our eggs in one basket. If that dedicated server has some software or hardware issue, we could lose data and be offline for hours. Imagine if we had to restore a 1TB backup because our database was unrecoverable. It would be a headache. And if your MySQL instance required more CPU or RAM, you might have to take everything offline to perform the upgrade.
And that’s one of the reasons why this approach isn’t as common anymore.</p> <h2>Broken-out Traditional</h2> <p>With the ‘Broken-out Traditional’ approach, we split our application up into pieces. Instead of having the database and the web server together, we now have two servers: one responsible for MySQL, and the other responsible for the web server. This is better because it means we can control each server separately. We can tweak the MySQL server’s parameters easily in the event of maintenance or server upgrades without taking the web server offline.</p> <p>So this is certainly a step up from traditional. There are obviously many other combinations where you can set up your infrastructure in a custom way to add redundancy to your application, stepping up each time, but I’m not going to dive into them because we’d be here for weeks. The downside is that you increase complexity and management overhead as you scale out. As the late Biggie Smalls once said, <a href="">Mo Servers Mo Problems</a>.</p> <h2>Serverless</h2> <p>We will jump right past the infinite other configurations and onto a serverless set-up (done with Laravel Vapor) because that’s what we’re going to be talking about in the course.</p> <p>With serverless, we still use servers, but we don’t manage them. Instead, we utilize services like:</p> <ul> <li>Lambda (HTTP Layer / Commands / Queue workers)</li> <li>RDS (Highly available database)</li> <li>SQS (Queue)</li> <li>SES (Email)</li> <li>ElastiCache / DynamoDB (Cache)</li> <li>CloudFront (Content Delivery Network)</li> <li>and more.</li> </ul> <p>These are the AWS services that Vapor utilizes. Because they are services, we can provision them with certain settings, without having to worry about underlying server tweaks, updates, and management. Instead, a team of experts keeps the services online for us.
There are thousands of servers running these services but, as far as we’re concerned, we’re serverless because we don’t care about the servers.</p> <p>So that’s the difference. By choosing serverless deployment, we choose to focus on our applications and not on infrastructure. We choose to pay $39 each month for Laravel Vapor instead of $1,000 (or more) each month for someone to manage our servers.</p> <p>Some people like DevOps. If that’s you, serverless might not be for you. But for those of us who want to spend our time writing code and making our applications better, serverless is the best thing since sliced bread.</p> <p>Sure, we still have to manage services occasionally, but that’s rare and easy. We stay at a high level when we use these services, and never have to dive into server problems.</p> <p>We choose sleep instead of 2 AM wake-up calls from uptime monitoring solutions telling us our website is down. We choose a managed database service where database storage is decoupled from the main computation. And all the other serverless goodies.</p> <p>Serverless Laravel isn’t a trend; it’s the logical next step. Less time on infrastructure and more time building our applications.</p> Improve Laravel Vapor response time with prewarming 2020-03-22T00:00:00+00:00 2020-03-22T00:00:00+00:00 Jack Ellis, Fathom Analytics Mar 22, 2020 PT - <p>For those of you who aren't aware, prewarming is used in Vapor to reduce cold starts. When you deploy your application, there won't be any Lambda containers that are "warm" and ready to respond to requests by default. This leads to guaranteed cold starts, which can add up to 2 seconds to response times. Not ideal. Prewarming means that Vapor will send requests to a specified number of containers on deployment to warm them up and will continue to send requests every 5 minutes to keep them warm.
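</p> <p>The number of containers Vapor keeps warm lives in your project's vapor.yml file, under each environment's warm option. A minimal sketch (the id, name and container count here are illustrative, not from a real project):</p>

```yaml
# vapor.yml (illustrative values)
id: 1
name: my-app
environments:
  production:
    # Vapor will ping 40 containers every 5 minutes to keep them warm
    warm: 40
```

<p>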
So prewarming is a great thing and you should always do it.</p> <p>For those who already prewarm, you may be thinking "Am I prewarming enough?" or "Am I prewarming too much?". In addition to those concerns, you also might have heard the argument of "Why don't you just use a standard server since you won't get any cold starts?" etc. Well, I have news for you, and it's good news.</p> <p>Let's consider a $5 / month DigitalOcean droplet. It's always online with 1GB of memory ready to go. Let's say that each Laravel request uses 25MB of memory. That means, in theory, if all memory is assigned to PHP requests (heh), the Droplet can handle 40 concurrent requests.</p> <p>In Vapor, if you want to match that concurrency availability, you would need to have 40 Lambda containers warmed up. And guess what, it's not expensive to do.</p> <p>Vapor pings your containers every 5 minutes when you use prewarming. There are ~730 hours in a month. 730 x 60 = 43,800 minutes. 43,800 minutes / 5 = 8,760. So there are 8,760 requests per month sent by Vapor per container for prewarming.</p> <p>Let's use a 512MB Lambda container (far more than the 25MB each DigitalOcean request has) and consider warming 40 containers.</p> <h2>Post-Deployment Warming</h2> <p>Let's be pessimistic and assume that the post-deployment warm takes 2 seconds and that we are responsible for the cost of the start-up. AWS may not charge us for this, but I'm not sure who incurs the cost: Vapor or us.
When you don't know the answer to something like this, and can't find it documented, you should always assume the worst-case pricing scenario. So let's pretend we will be billed 2 seconds per cold start:</p> <ul> <li>512MB for 2 seconds = $0.00001666</li> <li>Request = $0.0000002</li> <li>Data Transfer costs are $0.01 per GB, so they're negligible</li> <li>Total cost per deployment for 1 container to be warmed: $0.00001686</li> <li>Total cost per deployment for 40 containers to be warmed: $0.0006744</li> </ul> <p>So each deployment will cost us $0.0006744. Let's say we deploy 500 times in a month (Freek Van der Herten productivity). That'd cost us about 34 cents in a month.</p> <h2>Warming every 5 minutes</h2> <p>These warms keep your already-warm containers warm, so you won't have that 2-second cold start.</p> <ul> <li>512MB for 200ms (estimate) = $0.000001666</li> <li>Request = $0.0000002</li> <li>Data Transfer costs of $0.01 per GB are negligible</li> <li>Total cost per 5 minute "ping" = $0.000001866</li> <li>Total cost per 5 minute "ping" for 40 containers = $0.00007464</li> <li>Total cost for 40 containers to be warmed every 5 minutes for a month = $0.6538464</li> </ul> <p>Isn't that incredible? So what does that mean in real talk? Let's assume an API with 200ms response times.</p> <ul> <li>$0.6538464 / month = ready for 200 req/sec concurrency</li> <li>$1.3076928 / month = ready for 400 req/sec concurrency</li> <li>$2.6153856 / month = ready for 800 req/sec concurrency</li> <li>$5.2307712 / month = ready for 1.6k req/sec concurrency</li> </ul> <p>And sure, you will still have to pay for the actual request processing, but the point is that you are fully warmed up and ready to handle incredible capacity for a ridiculously low price. Do you really think that someone who is handling 1.6k requests per second (5.7M req per hour) can't afford $5.23 to warm their containers up?
That's the price of a coffee.</p> <p>I'm confident that my math won't be exact, but it's a solid estimate based on <a href="">AWS' published pricing</a>. I would advise running the numbers for your own specific use case. All you need to do is use the AWS Lambda Pricing page to get the figures you need. Decide what kind of warming you want and go from there.</p> <p>My main point here? Do not be afraid of setting your warming setting nice and high... because you're worth it :)</p> <p>The Vapor team are actively working on improvements to their cold-start handling. It's not perfect: you'll still have cases where a few requests encounter a delay (if they hit during the exact moment prewarming is occurring), but prewarming helps. They introduced provisioned concurrency (1 week after Lambda announced it), which is great, but it's way too expensive. They're exploring new ideas to improve cold starts, and they move very fast.</p>
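<p>Since there's a lot of arithmetic above, here's a small PHP sketch of the monthly warming-cost calculation, using AWS's published Lambda prices at the time (per-GB-second compute and per-request figures; adjust for your region). It lands within a fraction of a cent of the $0.6538464 figure above; the tiny difference comes from where you round the per-ping cost.</p>

```php
// Sketch of the monthly prewarming cost math from this post.
// Prices are AWS Lambda's published figures (us-east-1 at the time of writing):
// $0.0000166667 per GB-second of compute, $0.0000002 per request.
$gbSecondPrice = 0.0000166667;
$requestPrice  = 0.0000002;

$memoryGb      = 0.5;             // 512MB container
$pingSeconds   = 0.2;             // ~200ms per prewarming ping (estimate)
$containers    = 40;              // containers kept warm
$pingsPerMonth = (730 * 60) / 5;  // one ping every 5 minutes = 8,760 per container

// One prewarming ping for one container: compute cost + request cost
$costPerPing = ($memoryGb * $pingSeconds * $gbSecondPrice) + $requestPrice;

// Keeping all containers warm for a whole month
$monthlyCost = $costPerPing * $containers * $pingsPerMonth;

echo "Cost per ping: $" . number_format($costPerPing, 9) . "\n"; // ~$0.000001867
echo "Monthly cost:  $" . number_format($monthlyCost, 4) . "\n"; // ~$0.6541
```

<p>Swap in your own container size, ping duration and container count to price up your workload.</p>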