When API requests are made one after the other, they'll quickly hit rate limits, and when that happens:

> If you provide an API client that doesn't include rate limiting, you don't really have an API client. You've got an exception generator with a remote timer.
>
> — Richard Schneeman (Stay Inside), June 12, 2019

That tweet spawned a discussion that generated a quest to add rate throttling logic to the platform-api gem that Heroku maintains for talking to its API in Ruby.

If the term "rate throttling" is new to you, read Rate limiting, rate throttling, and how they work together.

The Heroku API uses the Generic Cell Rate Algorithm (GCRA), as described by Brandur in this post, on the server side. The API limits the number of requests each user can make per hour to protect against abuse and buggy code. Each account has a pool of request tokens that can hold at most 4500 tokens. Each API call removes one token from the pool. Tokens are added to the account pool at a rate of roughly 75 per minute (or 4500 per hour), up to a maximum of 4500. If no tokens remain, further calls return `429 Too Many Requests` until more tokens become available.

I needed to write an algorithm that never errored as a result of a 429 response. A "simple" solution would be to add a retry to all requests when they see a 429, but that would effectively DDoS the API. I made it a goal for the rate throttling client to also minimize its retry rate. That is, if the client makes 100 requests and 10 of them get a 429 response, its retry rate is 10%. Since the code needed to be contained entirely in the client library, it had to function without distributed coordination between multiple clients on multiple machines, except for whatever information the Heroku API returned.

## Making client throttling maintainable

Before we can get into what logic goes into a quality rate throttling algorithm, I want to talk about the process that I used, as I think the journey is just as fascinating as the destination.

I initially started out wanting to write tests for my rate throttling strategy. I quickly realized that while the behavior "retries a request after a 429 response" is easy to check, the quality claim "this rate throttle strategy is better than others" could not be checked quite as easily. The solution that I came up with was to write a simulator in addition to tests. I would simulate the server's behavior, then boot up several processes and threads and hit the simulated server with requests to observe the system's behavior.

I initially just output values to the CLI as the simulation ran, but found it challenging to make sense of them all, so I added charting. I also found my simulation took too long to run, so I added a mechanism to speed up the simulated time. I used those two outputs to write what I thought was a pretty good rate throttling algorithm.

The next task was wiring it up to the platform-api gem. To help out, I paired with a Heroku engineer, Lola; we ended up making several PRs to a bunch of related projects, and that's its own story to tell. Finally, the day came when we were ready to get rate throttling into the platform-api gem; all we needed was a review.

Unfortunately, the algorithm I developed from "watching some charts for a few hours" didn't make a whole lot of sense, and it was painfully apparent that it wasn't maintainable. While I had developed a good gut feel for what a "good" algorithm did and how it behaved, I had no way of solidifying that knowledge into something that others could run with. Imagine someone in the future wants to make a change to the algorithm, and I'm no longer here. The tests I had could prevent them from breaking some expectations, but there was nothing to help them make a better algorithm.

## The making of an algorithm

At this point, I could explain the approach I had taken to build an algorithm, but I had no way to quantify the "goodness" of my algorithm. That's when I decided to throw it all away and start from first principles. Instead of asking "what would make my algorithm better?", I asked, "how would I know a change to my algorithm is better?" and then worked to develop some ways to quantify what "better" meant. Here are the goals I ended up coming up with:

- Minimize average retry rate: the fewer failed API requests, the better.
- Minimize maximum sleep time: rate throttling involves waiting, and no one wants to wait for too long.
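To make the server-side scheme concrete, here is a toy token-bucket model matching the numbers described in the post (a pool capped at 4500 tokens, refilled at roughly 75 per minute, returning 429 when empty). The class and method names are illustrative, and this is a simplification of GCRA, not Heroku's actual implementation:

```ruby
# Toy model of the server-side token pool described above.
# Hypothetical names; a sketch, not Heroku's real rate limiter.
class TokenBucket
  MAX_TOKENS  = 4500
  REFILL_RATE = 75.0 / 60.0 # tokens per second (~75/minute)

  def initialize(now: Time.now.to_f)
    @tokens      = MAX_TOKENS.to_f
    @last_refill = now
  end

  # Returns an HTTP-style status: 200 when a token is available, 429 otherwise.
  def request(now: Time.now.to_f)
    refill(now)
    if @tokens >= 1
      @tokens -= 1
      200
    else
      429
    end
  end

  private

  # Add tokens for the elapsed time, never exceeding the pool maximum.
  def refill(now)
    elapsed      = now - @last_refill
    @last_refill = now
    @tokens      = [@tokens + elapsed * REFILL_RATE, MAX_TOKENS].min
  end
end
```

Passing `now:` explicitly (instead of reading the real clock) is what makes a model like this usable inside a sped-up simulation: a burst of 4500 calls at one timestamp drains the pool, and advancing the timestamp by 60 seconds restores about 75 tokens.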
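On the client side, the retry-rate metric and the "never error on a 429" requirement can be sketched as a small wrapper that retries 429s with a growing pause and tracks how often it had to retry. This is a deliberately naive sketch with made-up names, not the algorithm that shipped in the platform-api gem; the injectable sleeper is the trick that lets a simulator fast-forward time instead of really sleeping:

```ruby
# Naive client-side throttle sketch: retry 429s, track retry rate.
# Illustrative names only; not the platform-api gem's API.
class RateThrottle
  attr_reader :requests, :retries

  def initialize(sleeper: ->(seconds) { sleep(seconds) })
    @requests = 0
    @retries  = 0
    @sleeper  = sleeper # injectable so a simulator can skip real waiting
  end

  # Calls the block until it returns something other than a 429 status,
  # doubling the pause between attempts.
  def call
    pause = 1
    loop do
      @requests += 1
      status = yield
      return status unless status == 429

      @retries += 1
      @sleeper.call(pause)
      pause *= 2
    end
  end

  # The metric from the post: 10 retried requests out of 100 => 0.1.
  def retry_rate
    return 0.0 if @requests.zero?
    @retries.to_f / @requests
  end
end
```

Measuring retry rate this way is what makes "is this strategy better?" answerable in a simulator; the post's second goal (minimize maximum sleep time) is also why an unbounded doubling pause like the one above would not be the final design.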