ℹ️ This is an optional upgrade of op-node.
❗ However, it is strongly recommended for low-throughput chains whose safe head increases are typically more than 3 hours apart.
## L1 cache size
This release of op-node adds a new optional flag `--l1.cache-size` (env var `OP_NODE_L1_CACHE_SIZE`) to configure the L1 cache size. If it is not set (or set to 0), a default of 2/3 of the sequencing window is used. The previous hard-coded limit of 1000 has also been removed; it was too low to hold more than ~3h of L1 data.
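As a quick illustration (the flag and env var names are from this release; the value and the placeholder for your remaining flags are purely illustrative), the new option can be set either way:

```shell
# Set the L1 cache size explicitly via the CLI flag (value is illustrative):
op-node --l1.cache-size=2400 <your other flags>

# Equivalently, via the environment variable:
OP_NODE_L1_CACHE_SIZE=2400 op-node <your other flags>

# Leave it unset (or set it to 0) to use the new default
# of 2/3 of the sequencing window.
```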
It is strongly recommended that low-throughput chains upgrade to this release and leave this value unset. If you observed op-node stalling right before it increased the safe head, that was caused by long gaps between safe head increases: the cache went cold, so all L1 data had to be fetched again.
On the other hand, if you run a node for a high-throughput chain, you may want to explicitly reduce the cache size, e.g. back to the old value of 1000. Otherwise you may observe increased op-node memory consumption: 2 GB used to be enough with an L1 cache size of 1000, but this may need to be increased to 4-8 GB with the new default of 2400 (2/3 of the sequencing window = 2/3 * 60 * 12 * 5).
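To make the arithmetic behind the new default explicit (assuming 12-second L1 blocks, i.e. 5 blocks per minute, and a 12-hour sequencing window as in the formula above):

```shell
# Sequencing window: 60 min/h * 12 h * 5 L1 blocks/min = 3600 blocks.
SEQ_WINDOW=$((60 * 12 * 5))           # 3600
# The new default cache size is 2/3 of the sequencing window.
DEFAULT_CACHE=$((SEQ_WINDOW * 2 / 3))
echo "$DEFAULT_CACHE"                 # prints 2400
```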
## op-node changelog
- fix: First Time JWT Generation Bug by @axelKingsley in #13431
- [op-node/withdrawals] Cleanup, and support multiple withdrawal events in a single receipt by @mdehoog in #13568
- op-node/rollup/derive: add info logging by @sebastianst in #13753
- op-node,op-service: Make L1 cache size configurable by @sebastianst in #13772
- go: update SCR dependency to add Soneium Mainnet chain config by @sebastianst in #13784
**Full Changelog**: op-node/v1.10.2...op-node/v1.10.3
🚢 Docker Image https://us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:v1.10.3