Resolved Feb 06 at 08:04pm UTC
We were able to locate the source of the problem on our end that was limiting our scaling under certain loads. We've deployed a fix and are now accepting all traffic. If there's a silver lining, it's that we fixed a long-standing performance problem that reared its head sporadically, and now we know why. We're going to write up a complete postmortem soon. Thanks so much for your patience, and I sincerely apologize for the problems this outage has caused all your businesses over the last 36 hour...
Resolved Jan 25 at 02:39am UTC
Everything looks to be back to normal. Please reach out to email@example.com if you encounter any issues, or if you have further questions about what occurred.
Scheduled Database Upgrade
Resolved Dec 19 at 03:00am UTC
We will be performing a scheduled upgrade of our Postgres datastores on December 19th, 2023 at 3am UTC (i.e. the 18th at 9pm CST). During the upgrade and changeover, there may be up to 30 minutes of downtime.
Resolved Dec 18 at 05:00pm UTC
Today at 7am CST, we experienced a partial outage due to elevated API volume. We scaled up our infrastructure to accommodate the unexpected volume, but due to other factors, we continued to see elevated error rates. The cause was our event backlog filling up faster than we could process it (driven by multiple factors, including webhook delivery failures), eventually leading to an out-of-memory failure on our Redis datastores and further failed requests. We resolved the webhook-related fa...
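For readers curious about the failure mode above: when producers outpace consumers, an unbounded backlog grows until the datastore exhausts memory. A common mitigation is to cap the backlog and reject new work (backpressure) so memory stays bounded. The sketch below is purely illustrative, with hypothetical names, and is not our actual implementation:

```python
from collections import deque

class BoundedBacklog:
    """Illustrative sketch of a backlog with a hard capacity.

    An unbounded backlog under sustained overload grows until memory
    runs out; a cap plus backpressure bounds memory at the cost of
    some rejected enqueues. Names here are hypothetical.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue: deque = deque()
        self.rejected = 0

    def enqueue(self, event) -> bool:
        # Reject instead of growing without bound (backpressure).
        if len(self.queue) >= self.capacity:
            self.rejected += 1
            return False
        self.queue.append(event)
        return True

    def drain(self, n: int) -> int:
        """Process up to n events; return how many were handled."""
        handled = 0
        while self.queue and handled < n:
            self.queue.popleft()
            handled += 1
        return handled

# Simulate sustained overload: 100 events/tick arrive, only 60 drained.
backlog = BoundedBacklog(capacity=200)
for _ in range(10):
    for e in range(100):
        backlog.enqueue(e)
    backlog.drain(60)

# The backlog never exceeds its cap, so memory stays bounded.
assert len(backlog.queue) <= backlog.capacity
```

Without the cap, the queue in this simulation would grow by 40 events per tick indefinitely, which is the shape of the out-of-memory failure described above.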