gci-proxy has two very important knobs and two important trade-offs:
maxFraction: defines the percentage of genSize that serves as the upper bound of the shedding threshold. On the one hand, if it is too big, the GC might be triggered spuriously in some languages (e.g. Java and Node.js); on the other hand, if it is too small, the GC would be triggered too often. In our experience, getting as close as 80% (the upper bound) is safe for G1GC and the Node.js GC.
maxSampleSize: determines the maximum number of requests processed before the next heap check. If it is too big, memory consumption between checks might grow too much, which could trigger the GC or make the process run out of memory. If it is too small, the checks might incur too much overhead. So far, gci-proxy does not target super-high-throughput instances, so one check every 1024 requests seems good enough.
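To make the interaction between the two knobs concrete, here is a minimal sketch of how they could enter the request path. The names (shouldShed, requestCount, heapUsed) and the structure are assumptions for illustration, not gci-proxy's actual implementation:

```go
package main

import "fmt"

// Hypothetical constants mirroring the two knobs discussed above.
const (
	maxFraction   = 0.8  // upper bound of the shedding threshold, as a fraction of genSize
	maxSampleSize = 1024 // requests processed between consecutive heap checks
)

// shouldShed sketches the per-request decision: the heap is only
// inspected once every maxSampleSize requests, which bounds the
// overhead of checking at the cost of staleness between checks.
func shouldShed(requestCount int, heapUsed, genSize uint64) bool {
	if requestCount%maxSampleSize != 0 {
		return false // skip the heap check to bound overhead
	}
	threshold := uint64(maxFraction * float64(genSize))
	return heapUsed > threshold
}

func main() {
	// With a 100 MB generation, the shedding threshold sits at 80 MB.
	fmt.Println(shouldShed(1024, 85<<20, 100<<20)) // check boundary, past threshold
	fmt.Println(shouldShed(1025, 85<<20, 100<<20)) // between checks, never sheds
}
```

The trade-off described above falls out directly: a larger maxSampleSize means more requests land in the "skip" branch, so heap growth goes unnoticed for longer.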
We've just noticed that those two knobs are inversely correlated. If we decrease maxSampleSize (increasing the frequency of heap checks), it is safe to move maxFraction closer to 80%. The converse is also true: if maxSampleSize increases (decreasing the frequency of heap checks), the risk of hitting a peak and consuming too much memory increases, so it is better to decrease maxFraction first.
Since overhead is a factor driving the choice of both knobs, let's build an algorithm that updates them around it.