Coverage Summary for Class: SmoothRateLimiter (com.google.common.util.concurrent)
| Class | Method, % | Line, % |
|---|---|---|
| SmoothRateLimiter | 0% (0/7) | 0% (0/22) |
| SmoothRateLimiter$SmoothBursty | 0% (0/4) | 0% (0/12) |
| SmoothRateLimiter$SmoothWarmingUp | 0% (0/5) | 0% (0/27) |
| Total | 0% (0/16) | 0% (0/61) |
/*
 * Copyright (C) 2012 The Guava Authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package com.google.common.util.concurrent;

import static java.lang.Math.min;
import static java.util.concurrent.TimeUnit.SECONDS;

import com.google.common.annotations.GwtIncompatible;
import com.google.common.math.LongMath;
import java.util.concurrent.TimeUnit;

@GwtIncompatible
@ElementTypesAreNonnullByDefault
abstract class SmoothRateLimiter extends RateLimiter {
  /*
   * How is the RateLimiter designed, and why?
   *
   * The primary feature of a RateLimiter is its "stable rate", the maximum rate that it should
   * allow in normal conditions. This is enforced by "throttling" incoming requests as needed. For
   * example, we could compute the appropriate throttle time for an incoming request, and make the
   * calling thread wait for that time.
   *
   * The simplest way to maintain a rate of QPS is to keep the timestamp of the last granted
   * request, and ensure that (1/QPS) seconds have elapsed since then. For example, for a rate of
   * QPS=5 (5 tokens per second), if we ensure that a request isn't granted earlier than 200ms after
   * the last one, then we achieve the intended rate. If a request comes and the last request was
   * granted only 100ms ago, then we wait for another 100ms. At this rate, serving 15 fresh permits
   * (i.e. for an acquire(15) request) naturally takes 3 seconds.
   *
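   * As a minimal sketch of that naive approach (hypothetical code, not part of this class, and
   * ignoring synchronization), it would look something like:
   *
   *   class NaiveRateLimiter {
   *     private final long stableIntervalMicros; // e.g. 200_000 micros for QPS = 5
   *     private long lastGrantMicros = Long.MIN_VALUE / 2; // "long ago": first acquire is free
   *
   *     NaiveRateLimiter(double permitsPerSecond) {
   *       this.stableIntervalMicros = (long) (TimeUnit.SECONDS.toMicros(1L) / permitsPerSecond);
   *     }
   *
   *     void acquire() throws InterruptedException {
   *       long nowMicros = System.nanoTime() / 1000;
   *       long waitMicros = (lastGrantMicros + stableIntervalMicros) - nowMicros;
   *       if (waitMicros > 0) {
   *         TimeUnit.MICROSECONDS.sleep(waitMicros); // throttle: enforce 1/QPS spacing
   *         nowMicros += waitMicros;
   *       }
   *       lastGrantMicros = nowMicros; // remember only the last grant
   *     }
   *   }
   *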
   * It is important to realize that such a RateLimiter has a very superficial memory of the past:
   * it only remembers the last request. What if the RateLimiter was unused for a long period of
   * time, then a request arrived and was immediately granted? This RateLimiter would immediately
   * forget about that past underutilization. This may result in either underutilization or
   * overflow, depending on the real world consequences of not using the expected rate.
   *
   * Past underutilization could mean that excess resources are available. Then, the RateLimiter
   * should speed up for a while, to take advantage of these resources. This is important when the
   * rate is applied to networking (limiting bandwidth), where past underutilization typically
   * translates to "almost empty buffers", which can be filled immediately.
   *
   * On the other hand, past underutilization could mean that "the server responsible for handling
   * the request has become less ready for future requests", i.e. its caches become stale, and
   * requests become more likely to trigger expensive operations (a more extreme case of this
   * example is when a server has just booted, and it is mostly busy with getting itself up to
   * speed).
   *
   * To deal with such scenarios, we add an extra dimension, that of "past underutilization",
   * modeled by the "storedPermits" variable. This variable is zero when there is no
   * underutilization, and it can grow up to maxStoredPermits, for sufficiently large
   * underutilization. So, the requested permits, by an invocation acquire(permits), are served
   * from:
   *
   * - stored permits (if available)
   *
   * - fresh permits (for any remaining permits)
   *
   * How this works is best explained with an example:
   *
   * For a RateLimiter that produces 1 token per second, every second that goes by with the
   * RateLimiter being unused, we increase storedPermits by 1. Say we leave the RateLimiter unused
   * for 10 seconds (i.e., we expected a request at time X, but we are at time X + 10 seconds before
   * a request actually arrives; this is also related to the point made in the last paragraph), thus
   * storedPermits becomes 10.0 (assuming maxStoredPermits >= 10.0). At that point, a request of
   * acquire(3) arrives. We serve this request out of storedPermits, and reduce that to 7.0 (how
   * this is translated to throttling time is discussed later). Immediately after, assume that an
   * acquire(10) request arrives. We serve the request partly from storedPermits, using all the
   * remaining 7.0 permits, and the remaining 3.0 we serve with fresh permits produced by the rate
   * limiter.
   *
   * We already know how much time it takes to serve 3 fresh permits: if the rate is
   * "1 token per second", then this will take 3 seconds. But what does it mean to serve 7 stored
   * permits? As explained above, there is no unique answer. If we are primarily interested in
   * dealing with underutilization, then we want stored permits to be given out /faster/ than fresh
   * ones, because underutilization = free resources for the taking. If we are primarily interested
   * in dealing with overflow, then stored permits could be given out /slower/ than fresh ones.
   * Thus, we require a (different in each case) function that translates storedPermits to
   * throttling time.
   *
   * This role is played by storedPermitsToWaitTime(double storedPermits, double permitsToTake). The
   * underlying model is a continuous function mapping storedPermits (from 0.0 to maxStoredPermits)
   * onto the 1/rate (i.e. intervals) that is effective at the given storedPermits. "storedPermits"
   * essentially measure unused time; we spend unused time buying/storing permits. Rate is
   * "permits / time", thus "1 / rate = time / permits". Thus, "1/rate" (time / permits) times
   * "permits" gives time, i.e., integrals on this function (which is what storedPermitsToWaitTime()
   * computes) correspond to minimum intervals between subsequent requests, for the specified number
   * of requested permits.
   *
   * Here is an example of storedPermitsToWaitTime: If storedPermits == 10.0, and we want 3 permits,
   * we take them from storedPermits, reducing them to 7.0, and compute the throttling for these as
   * a call to storedPermitsToWaitTime(storedPermits = 10.0, permitsToTake = 3.0), which will
   * evaluate the integral of the function from 7.0 to 10.0.
   *
   * Using integrals guarantees that the effect of a single acquire(3) is equivalent to {
   * acquire(1); acquire(1); acquire(1); }, or { acquire(2); acquire(1); }, etc, since the integral
   * of the function in [7.0, 10.0] is equivalent to the sum of the integrals of [7.0, 8.0], [8.0,
   * 9.0], [9.0, 10.0] (and so on), no matter what the function is. This guarantees that we handle
   * correctly requests of varying weight (permits), /no matter/ what the actual function is - so we
   * can tweak the latter freely. (The only requirement, obviously, is that we can compute its
   * integrals.)
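   *
   * To make that additivity concrete (an illustrative check, not code from this class): for any
   * linear supply function f(p) = a + b * p (in seconds per permit), the exact integral over
   * [lo, hi] is
   *
   *   double integral(double a, double b, double lo, double hi) {
   *     return a * (hi - lo) + b * (hi * hi - lo * lo) / 2.0; // exact, since f is linear
   *   }
   *
   * and integral(a, b, 7.0, 10.0) == integral(a, b, 7.0, 8.0) + integral(a, b, 8.0, 9.0)
   * + integral(a, b, 9.0, 10.0) for any a and b, which is why acquire(3) and three acquire(1)
   * calls are throttled identically.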
   *
   * Note well that if, for this function, we chose a horizontal line, at height of exactly (1/QPS),
   * then the effect of the function is non-existent: we serve storedPermits at exactly the same
   * cost as fresh ones (1/QPS is the cost for each). We use this trick later.
   *
   * If we pick a function that goes /below/ that horizontal line, it means that we reduce the area
   * of the function, thus time. Thus, the RateLimiter becomes /faster/ after a period of
   * underutilization. If, on the other hand, we pick a function that goes /above/ that horizontal
   * line, then it means that the area (time) is increased, thus storedPermits are more costly than
   * fresh permits, thus the RateLimiter becomes /slower/ after a period of underutilization.
   *
   * Last, but not least: consider a RateLimiter with a rate of 1 permit per second, currently
   * completely unused, when an expensive acquire(100) request arrives. It would be nonsensical to
   * just wait for 100 seconds, and /then/ start the actual task. Why wait without doing anything? A
   * much better approach is to /allow/ the request right away (as if it was an acquire(1) request
   * instead), and postpone /subsequent/ requests as needed. In this version, we allow starting the
   * task immediately, and postpone by 100 seconds future requests, thus we allow for work to get
   * done in the meantime instead of waiting idly.
   *
   * This has important consequences: it means that the RateLimiter doesn't remember the time of the
   * _last_ request, but it remembers the (expected) time of the _next_ request. This also enables
   * us to tell immediately (see tryAcquire(timeout)) whether a particular timeout is enough to get
   * us to the point of the next scheduling time, since we always maintain that. And what we mean by
   * "an unused RateLimiter" is also defined by that notion: when we observe that the
   * "expected arrival time of the next request" is actually in the past, then the difference (now -
   * past) is the amount of time that the RateLimiter was formally unused, and it is that amount of
   * time which we translate to storedPermits. (We increase storedPermits with the amount of permits
   * that would have been produced in that idle time.) So, if rate == 1 permit per second, and
   * arrivals come exactly one second after the previous, then storedPermits is _never_ increased --
   * we would only increase it for arrivals _later_ than the expected one second.
   */
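
  /*
   * From a caller's perspective, the "grant now, pay later" behavior above looks like this
   * (an illustrative sketch against the public RateLimiter API; timings are approximate):
   *
   *   RateLimiter limiter = RateLimiter.create(1.0); // 1 permit per second
   *   double wait1 = limiter.acquire(100); // ~0.0 s: granted immediately, debt is recorded
   *   double wait2 = limiter.acquire(1);   // ~100.0 s: this call pays for the earlier burst
   */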

  /**
   * This implements the following function where coldInterval = coldFactor * stableInterval.
   *
   * <pre>
   *          ^ throttling
   *          |
   *    cold  +                  /
   * interval |                 /.
   *          |                / .
   *          |               /  .   ← "warmup period" is the area of the trapezoid between
   *          |              /   .     thresholdPermits and maxPermits
   *          |             /    .
   *          |            /     .
   *          |           /      .
   *   stable +----------/  WARM .
   * interval |          .   UP  .
   *          |          . PERIOD.
   *          |          .       .
   *        0 +----------+-------+--------------→ storedPermits
   *          0 thresholdPermits maxPermits
   * </pre>
   *
   * Before going into the details of this particular function, let's keep in mind the basics:
   *
   * <ol>
   *   <li>The state of the RateLimiter (storedPermits) is a vertical line in this figure.
   *   <li>When the RateLimiter is not used, this goes right (up to maxPermits).
   *   <li>When the RateLimiter is used, this goes left (down to zero), since if we have
   *       storedPermits, we serve from those first.
   *   <li>When _unused_, we go right at a constant rate! The rate at which we move to the right is
   *       chosen as maxPermits / warmupPeriod. This ensures that the time it takes to go from 0 to
   *       maxPermits is equal to warmupPeriod.
   *   <li>When _used_, the time it takes, as explained in the introductory class note, is equal to
   *       the integral of our function, between X permits and X-K permits, assuming we want to
   *       spend K saved permits.
   * </ol>
   *
   * <p>In summary, the time it takes to move to the left (spend K permits) is equal to the area of
   * the function of width == K.
   *
   * <p>Assuming we have saturated demand, the time to go from maxPermits to thresholdPermits is
   * equal to warmupPeriod. And the time to go from thresholdPermits to 0 is warmupPeriod/2. (The
   * reason that this is warmupPeriod/2 is to maintain the behavior of the original implementation
   * where coldFactor was hard coded as 3.)
   *
   * <p>It remains to calculate thresholdPermits and maxPermits.
   *
   * <ul>
   *   <li>The time to go from thresholdPermits to 0 is equal to the integral of the function
   *       between 0 and thresholdPermits. This is thresholdPermits * stableInterval. By the
   *       "warmupPeriod/2" requirement above, it is also equal to warmupPeriod/2. Therefore
   *       <blockquote>
   *       thresholdPermits = 0.5 * warmupPeriod / stableInterval
   *       </blockquote>
   *   <li>The time to go from maxPermits to thresholdPermits is equal to the integral of the
   *       function between thresholdPermits and maxPermits. This is the area of the pictured
   *       trapezoid, and it is equal to 0.5 * (stableInterval + coldInterval) * (maxPermits -
   *       thresholdPermits). It is also equal to warmupPeriod, so
   *       <blockquote>
   *       maxPermits = thresholdPermits + 2 * warmupPeriod / (stableInterval + coldInterval)
   *       </blockquote>
   * </ul>
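   *
   * <p>For instance (illustrative numbers, not from the original doc): with permitsPerSecond = 5
   * (stableInterval = 0.2s), warmupPeriod = 2s, and coldFactor = 3 (coldInterval = 0.6s):
   *
   * <pre>
   * thresholdPermits = 0.5 * 2 / 0.2 = 5
   * maxPermits = 5 + 2 * 2 / (0.2 + 0.6) = 10
   * slope = (0.6 - 0.2) / (10 - 5) = 0.08 seconds per permit, per stored permit
   * </pre>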
   */
  static final class SmoothWarmingUp extends SmoothRateLimiter {
    private final long warmupPeriodMicros;

    /**
     * The slope of the line from the stable interval (when permits == 0) to the cold interval
     * (when permits == maxPermits).
     */
    private double slope;

    private double thresholdPermits;
    private double coldFactor;

    SmoothWarmingUp(
        SleepingStopwatch stopwatch, long warmupPeriod, TimeUnit timeUnit, double coldFactor) {
      super(stopwatch);
      this.warmupPeriodMicros = timeUnit.toMicros(warmupPeriod);
      this.coldFactor = coldFactor;
    }

    @Override
    void doSetRate(double permitsPerSecond, double stableIntervalMicros) {
      double oldMaxPermits = maxPermits;
      double coldIntervalMicros = stableIntervalMicros * coldFactor;
      thresholdPermits = 0.5 * warmupPeriodMicros / stableIntervalMicros;
      maxPermits =
          thresholdPermits + 2.0 * warmupPeriodMicros / (stableIntervalMicros + coldIntervalMicros);
      slope = (coldIntervalMicros - stableIntervalMicros) / (maxPermits - thresholdPermits);
      if (oldMaxPermits == Double.POSITIVE_INFINITY) {
        // if we don't special-case this, we would get storedPermits == NaN, below
        storedPermits = 0.0;
      } else {
        storedPermits =
            (oldMaxPermits == 0.0)
                ? maxPermits // initial state is cold
                : storedPermits * maxPermits / oldMaxPermits;
      }
    }

    @Override
    long storedPermitsToWaitTime(double storedPermits, double permitsToTake) {
      double availablePermitsAboveThreshold = storedPermits - thresholdPermits;
      long micros = 0;
      // measuring the integral on the right part of the function (the climbing line)
      if (availablePermitsAboveThreshold > 0.0) {
        double permitsAboveThresholdToTake = min(availablePermitsAboveThreshold, permitsToTake);
        // TODO(cpovirk): Figure out a good name for this variable.
        double length =
            permitsToTime(availablePermitsAboveThreshold)
                + permitsToTime(availablePermitsAboveThreshold - permitsAboveThresholdToTake);
        micros = (long) (permitsAboveThresholdToTake * length / 2.0);
        permitsToTake -= permitsAboveThresholdToTake;
      }
      // measuring the integral on the left part of the function (the horizontal line)
      micros += (long) (stableIntervalMicros * permitsToTake);
      return micros;
    }

    private double permitsToTime(double permits) {
      return stableIntervalMicros + permits * slope;
    }

    @Override
    double coolDownIntervalMicros() {
      return warmupPeriodMicros / maxPermits;
    }
  }
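
  /*
   * A worked pass through storedPermitsToWaitTime above, using the same illustrative numbers as
   * the class doc (5 permits/s, 2s warmup, coldFactor = 3, so stableInterval = 0.2s,
   * thresholdPermits = 5, maxPermits = 10, slope = 0.08): a call with storedPermits = 10.0 and
   * permitsToTake = 3.0 serves all 3 permits from above the threshold, and the trapezoid rule
   * gives
   *
   *   length = permitsToTime(5.0) + permitsToTime(2.0)
   *          = (0.2 + 5.0 * 0.08) + (0.2 + 2.0 * 0.08) = 0.6 + 0.36 = 0.96 s/permit
   *   wait   = 3.0 * 0.96 / 2.0 = 1.44 seconds, i.e. 1_440_000 micros returned
   *
   * So spending the three most expensive stored permits costs 1.44s, versus 0.6s for three fresh
   * permits at the stable rate.
   */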

  /**
   * This implements a "bursty" RateLimiter, where storedPermits are translated to zero throttling.
   * The maximum number of permits that can be saved (when the RateLimiter is unused) is defined in
   * terms of time, in this sense: if a RateLimiter is 2qps, and this time is specified as 10
   * seconds, we can save up to 2 * 10 = 20 permits.
   */
  static final class SmoothBursty extends SmoothRateLimiter {
    /** How many seconds' worth of work (permits) can be saved up if this RateLimiter is unused? */
    final double maxBurstSeconds;

    SmoothBursty(SleepingStopwatch stopwatch, double maxBurstSeconds) {
      super(stopwatch);
      this.maxBurstSeconds = maxBurstSeconds;
    }

    @Override
    void doSetRate(double permitsPerSecond, double stableIntervalMicros) {
      double oldMaxPermits = this.maxPermits;
      maxPermits = maxBurstSeconds * permitsPerSecond;
      if (oldMaxPermits == Double.POSITIVE_INFINITY) {
        // if we don't special-case this, we would get storedPermits == NaN, below
        storedPermits = maxPermits;
      } else {
        storedPermits =
            (oldMaxPermits == 0.0)
                ? 0.0 // initial state
                : storedPermits * maxPermits / oldMaxPermits;
      }
    }

    @Override
    long storedPermitsToWaitTime(double storedPermits, double permitsToTake) {
      return 0L;
    }

    @Override
    double coolDownIntervalMicros() {
      return stableIntervalMicros;
    }
  }

  /** The currently stored permits. */
  double storedPermits;

  /** The maximum number of stored permits. */
  double maxPermits;

  /**
   * The interval between two unit requests, at our stable rate. E.g., a stable rate of 5 permits
   * per second has a stable interval of 200ms.
   */
  double stableIntervalMicros;

  /**
   * The time when the next request (no matter its size) will be granted. After granting a request,
   * this is pushed further in the future. Large requests push this further than small requests.
   */
  private long nextFreeTicketMicros = 0L; // could be either in the past or future

  private SmoothRateLimiter(SleepingStopwatch stopwatch) {
    super(stopwatch);
  }

  @Override
  final void doSetRate(double permitsPerSecond, long nowMicros) {
    resync(nowMicros);
    double stableIntervalMicros = SECONDS.toMicros(1L) / permitsPerSecond;
    this.stableIntervalMicros = stableIntervalMicros;
    doSetRate(permitsPerSecond, stableIntervalMicros);
  }

  abstract void doSetRate(double permitsPerSecond, double stableIntervalMicros);

  @Override
  final double doGetRate() {
    return SECONDS.toMicros(1L) / stableIntervalMicros;
  }

  @Override
  final long queryEarliestAvailable(long nowMicros) {
    return nextFreeTicketMicros;
  }

  @Override
  final long reserveEarliestAvailable(int requiredPermits, long nowMicros) {
    resync(nowMicros);
    long returnValue = nextFreeTicketMicros;
    double storedPermitsToSpend = min(requiredPermits, this.storedPermits);
    double freshPermits = requiredPermits - storedPermitsToSpend;
    long waitMicros =
        storedPermitsToWaitTime(this.storedPermits, storedPermitsToSpend)
            + (long) (freshPermits * stableIntervalMicros);

    this.nextFreeTicketMicros = LongMath.saturatedAdd(nextFreeTicketMicros, waitMicros);
    this.storedPermits -= storedPermitsToSpend;
    return returnValue;
  }

  /**
   * Translates a specified portion of our currently stored permits which we want to spend/acquire,
   * into a throttling time. Conceptually, this evaluates the integral of the underlying function we
   * use, for the range of [(storedPermits - permitsToTake), storedPermits].
   *
   * <p>This always holds: {@code 0 <= permitsToTake <= storedPermits}
   */
  abstract long storedPermitsToWaitTime(double storedPermits, double permitsToTake);

  /**
   * Returns the number of microseconds during cool down that we have to wait to get a new permit.
   */
  abstract double coolDownIntervalMicros();

  /** Updates {@code storedPermits} and {@code nextFreeTicketMicros} based on the current time. */
  void resync(long nowMicros) {
    // if nextFreeTicket is in the past, resync to now
    if (nowMicros > nextFreeTicketMicros) {
      double newPermits = (nowMicros - nextFreeTicketMicros) / coolDownIntervalMicros();
      storedPermits = min(maxPermits, storedPermits + newPermits);
      nextFreeTicketMicros = nowMicros;
    }
  }
}