Ask HN: Scheduling stateful nodes when MMAP makes memory accounting a lie
We’re hitting a classic distributed systems wall and I’m looking for war stories or "least worst" practices.
The Context: We maintain a distributed stateful engine (think search/analytics). The architecture is standard: a Control Plane (Coordinator) assigns data segments to Worker Nodes. The workload involves heavy use of mmap and lazy loading for large datasets.
The Incident: We had a cascading failure where the Coordinator got stuck in a loop, effectively DDoS-ing a specific node.
The Signal: Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."
The Action: Coordinator attempts to rebalance/load new segments onto Node A.
The Reality: Node A is actually sitting at 197GB RAM usage (near OOM). The data on it happens to be extremely wide (fat rows, huge blobs), so its logical row count is low, but physical footprint is massive.
The Loop: Node A rejects the load (or times out). The Coordinator ignores the backpressure, sees the low row count again, and retries immediately.
The Core Problem: We are trying to write a "God Equation" for our load balancer. We started with row_count, which failed. We looked at disk usage, but that doesn't correlate with RAM because of lazy loading.
Now we are staring at mmap. Because the OS manages the page cache, the application-level RSS is noisy and doesn't strictly reflect "required" memory vs "reclaimable" cache.
The Question: Attempting to enumerate every resource variable (CPU, IOPS, RSS, Disk, logical count) into a single scoring function feels like an NP-hard trap.
How do you handle placement in systems where memory usage is opaque/dynamic?
Dumb Coordinator, Smart Nodes: Should we just let the Coordinator blind-fire based on disk space, and rely 100% on the Node to return a hard 429 Too Many Requests based on local pressure? (Rough sketch of what I mean after these options.)
Cost Estimation: Do we try to build a synthetic "cost model" per segment (e.g., predicted memory footprint) and schedule based on credits, ignoring actual OS metrics?
Control Plane Decoupling: Separate storage balancing (disk) from query balancing (mem)?
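To make option 1 concrete, here's roughly what I had in mind on the node side -- a minimal sketch that gates segment loads on the kernel's Pressure Stall Information (assumes Linux with PSI enabled; the endpoint, port, and threshold are made up):

    import http.server

    PSI_PATH = "/proc/pressure/memory"   # Linux 4.20+ with PSI enabled
    AVG10_LIMIT = 10.0                   # hypothetical threshold: % of time stalled on memory

    def memory_stalled() -> bool:
        # /proc/pressure/memory looks like:
        #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
        #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
        with open(PSI_PATH) as f:
            some = f.readline().split()
        avg10 = float(some[1].split("=")[1])
        return avg10 > AVG10_LIMIT

    class AdmissionHandler(http.server.BaseHTTPRequestHandler):
        def do_POST(self):
            # Hypothetical endpoint the Coordinator calls to assign a segment.
            if self.path == "/segments" and memory_stalled():
                self.send_response(429)              # back off, node is under memory pressure
                self.send_header("Retry-After", "30")
                self.end_headers()
                return
            self.send_response(204)                  # accept the segment (real work elided)
            self.end_headers()

    if __name__ == "__main__":
        http.server.HTTPServer(("0.0.0.0", 8080), AdmissionHandler).serve_forever()

Of course this only helps if the Coordinator actually honors the 429 instead of retrying immediately, which is exactly the part we got wrong.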
Feels like we are reinventing the wheel. References to papers or similar architecture post-mortems appreciated.
I can suggest measuring working set size (WSS) instead of RSS. See https://docs.kernel.org/admin-guide/mm/multigen_lru.html and https://docs.kernel.org/mm/damon/index.html
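If you can't enable MGLRU or DAMON, a cruder estimate on a stock kernel is the referenced-bit trick: clear the accessed bits, wait an interval, then read back how much memory was actually touched. Rough sketch (the PID and interval are placeholders; it needs appropriate permissions and briefly perturbs the kernel's reclaim heuristics):

    import time

    def working_set_bytes(pid: int, interval_s: float = 60.0) -> int:
        """Crude WSS estimate: clear the referenced bits, wait, then see how
        much memory the process actually touched during the interval."""
        # Writing "1" clears the referenced/accessed bits on the process's pages.
        with open(f"/proc/{pid}/clear_refs", "w") as f:
            f.write("1")
        time.sleep(interval_s)
        # smaps_rollup aggregates per-mapping counters; "Referenced:" is the
        # amount of memory touched since the bits were cleared, in kB.
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Referenced:"):
                    return int(line.split()[1]) * 1024
        return 0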
There's a simple solution: don't use mmap(). There's a reason that databases use O_DIRECT to read into their own in-memory cache. If it was Good Enough for Oracle in the 1990s, it's probably Good Enough for you.
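For anyone who hasn't done it, the core of the O_DIRECT approach is small -- the fiddly part is the alignment rules. A minimal Linux-only sketch (the path, block size, and offset handling are illustrative, not a drop-in):

    import mmap
    import os

    BLOCK = 4096  # O_DIRECT wants buffer, offset and length aligned (typically 512B or 4KiB)

    def read_block_direct(path: str, offset: int, length: int = BLOCK) -> bytes:
        # An anonymous mmap gives a page-aligned buffer, which satisfies
        # O_DIRECT's alignment requirement for the destination.
        buf = mmap.mmap(-1, length)
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        try:
            # offset must also be BLOCK-aligned; the kernel DMAs straight into
            # our buffer and bypasses the page cache entirely.
            os.preadv(fd, [buf], offset)
            return bytes(buf)
        finally:
            os.close(fd)
            buf.close()

Whatever you read lands in a cache *you* size and evict, so your memory accounting stops depending on what the kernel happens to keep resident.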
mmap() is one of those things that looks like it's an easy solution when you start writing an application, but that's only because you don't know the complexity time bomb of what you're undertaking.
The entire point of the various ways of performing asynchronous disk I/O using APIs like io_uring is to manage when and where blocking of tasks for I/O occurs. When you know where blocking I/O gets done, you can make it part of your main event loop.
If you don't know when or where blocking occurs (be it on I/O or mutexes or other such things), you're forced to make up for it by increasing the size of your thread pool. But larger thread pools come with a penalty: task switches are expensive! Scheduling is expensive! AVX-512 registers alone are 2KB of state per task, and if a thread hasn't run for a while, you're probably missing in your L1 and L2 caches. That's pure overhead baked into the thread-pool architecture that you can entirely avoid with an event-driven architecture.
All the high performance systems I've worked on use event-driven architectures -- from various network protocol implementations (protocols like BGP on JunOS, the HA functionality) to high-speed (persistent and non-persistent) messaging (at Solace). It just makes everything easier when you're able to keep threads hot and locked to a single core. Bonus: when the system is at maximum load, you stay at pretty much the same number of requests per second, rather than degrading as the number of runnable threads grows and wastes CPU exactly when you need it most.
It's hard to believe that the event queue architecture I first encountered on an Amiga in the late 1980s when I was just a kid is still worth knowing today.
Relevant: https://db.cs.cmu.edu/papers/2022/cidr2022-p13-crotty.pdf
There is a database that uses `mmap()` - RavenDB. Their memory accounting is utter horror - they somehow use Committed_AS from /proc/meminfo in their calculations. Their recommendation to avoid OOMs is to have swap twice the size of RAM. Their Jepsen test results are pure comedy.
> Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."
Ok, so you are dealing with a classic - you measure A, but what matters is B. For "load" balancing a decent metric is, well, response time (and jitter).
For data partitioning - I guess number of rows is not the right metric? Change it to number*avg_size or something?
If you can't measure the thing directly, then take a look at stuff like a PID controller. This can be approached as a typical control-loop problem, although in 99% of cases doing PID for software systems is overkill.
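Even the degenerate proportional-only version usually gets you most of the benefit. Something like this (the setpoint, gain, and the choice of p99 latency as the signal are all assumptions you'd have to tune):

    TARGET_P99_MS = 200.0   # setpoint: the latency you want the node to sit at
    KP = 0.05               # proportional gain, tuned empirically

    def next_assignment_rate(current_p99_ms: float, current_rate: float) -> float:
        """Proportional-only controller: nudge the per-tick segment assignment
        rate up when the node has latency headroom, down when it doesn't."""
        error = TARGET_P99_MS - current_p99_ms   # positive => headroom
        return max(0.0, current_rate + KP * error)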
This actually seems like a simple example of memory request vs limit.
Request the amount of memory needed to be healthy; you can potentially set the limit higher to account for "reclaimable cache".
Another way to approach it, if you find there are too many limiting metrics to model things accurately: let the workers grab more segments until you determine that they are overloaded. Ideally, for this to work you have some idea that the node is approaching saturation. For example: keep adding segments as long as the nth-percentile response time is under some threshold.
The advantage of this approach is that you don't necessarily have to know which resource (memory, file handles, etc.) is at capacity. You don't even need deep knowledge of Linux memory management. You just have to be able to probe the system to determine whether it's healthy.
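In pseudo-Python the whole placement policy collapses to something like this (probe_p99_ms and assign stand in for whatever your coordinator already has; the threshold is arbitrary):

    P99_LIMIT_MS = 250.0   # assumed "healthy" threshold

    def fill_node(node, pending_segments, probe_p99_ms, assign):
        """Greedily hand segments to `node` until its observed p99 crosses the
        threshold -- no need to know which resource actually saturated."""
        for seg in pending_segments:
            if probe_p99_ms(node) > P99_LIMIT_MS:
                break          # node is full, in the only language that matters
            assign(node, seg)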
You can even go backwards with a binary-split mechanism: bring up a node that owns [A-H] (8 segments in this case). If that fails, bring up 2 nodes that own [A-D] and [E-H]; if that still fails, keep splitting all the way down to one segment per node.
mmap'ed memory counts as that "reclaimable cache", which isn't always reclaimable (dirty or active pages are not immediately reclaimable). But Kubernetes memory accounting assumes that the page cache is always reclaimable. This creates a lot of surprises and unexpected OOMs. https://github.com/kubernetes/kubernetes/issues/43916
It's not clear whether you're using Kubernetes, but the Kubernetes way of dealing with this problem is to declare a memory reservation (i.e., a request) along with the container specification. The amount of the reservation will be deducted from the host's available memory for scheduling purposes, regardless of whether the container actually consumes the reserved amount. It's also a best practice to configure the memory limit to be identical to the reservation, so if the container exceeds the reserved amount, the kernel will terminate it via the OOM killer.
Of course, for this to work, you have to figure out what that reserved amount should be. That is an exercise for the implementer (i.e., you).
See https://kubernetes.io/docs/concepts/configuration/manage-res...
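If you happen to drive this from the official Kubernetes Python client, the request==limit pattern looks roughly like this (the image name and the 200Gi figure are placeholders -- sizing it is still your problem, as above):

    from kubernetes import client

    worker = client.V1Container(
        name="worker",
        image="registry.example.com/stateful-worker:latest",   # placeholder image
        resources=client.V1ResourceRequirements(
            requests={"memory": "200Gi", "cpu": "8"},   # reserved at scheduling time
            limits={"memory": "200Gi"},                 # hard ceiling: exceed it and the
                                                        # pod gets OOM-killed, not the node
        ),
    )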
> Attempting to enumerate every resource variable (CPU, IOPS, RSS, Disk, logical count) into a single scoring function feels like an NP-hard trap.
Yeah, don't do that. Figure out what resources your applications need and then declare them, and let the scheduler find the best node based on the requirements you've specified.
> We are trying to write a "God Equation" for our load balancer. We started with row_count, which failed. We looked at disk usage, but that doesn't correlate with RAM because of lazy loading.
A few things come to mind...
First, you're talking about a load balancer, but it's not clear that you're trying to balance load! A good metric to use for load balancing is one whose value is proportional to response latency.
It smells like you're trying to provision resources based on an optimistic prediction of your working set size. Perhaps you need a more pessimistic prediction. It might also be that you're relying too heavily on the kernel to handle paging, when what you really need is a cache tuned for your application that is scan-resistant, coupled with O_DIRECT for I/O.
Memory pressure (and a lot of other overload conditions) usually makes latency worse--does that show up in your system? Latency-based backpressure is a pretty conventional thing to do. You're going to want some way to close the loop back to your load balancer; if you're doing open-loop control (sending a "fair share" of traffic to each node and assuming it can handle it), issues like the one you describe will keep coming up.
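One closed-loop shape that has served me well is per-node latency EWMAs plus power-of-two-choices placement. This isn't your algorithm, just an illustration of "a metric proportional to latency" feeding back into placement (the smoothing factor is arbitrary):

    import random

    ALPHA = 0.2   # EWMA smoothing factor (arbitrary)

    class LatencyBalancer:
        """Closed-loop sketch: every response feeds its latency back in, and new
        work goes to whichever of two sampled nodes currently looks faster."""
        def __init__(self, nodes):
            self.ewma_ms = {n: 0.0 for n in nodes}

        def record(self, node, latency_ms):
            self.ewma_ms[node] = ALPHA * latency_ms + (1 - ALPHA) * self.ewma_ms[node]

        def pick(self):
            a, b = random.sample(list(self.ewma_ms), 2)   # power of two choices
            return a if self.ewma_ms[a] <= self.ewma_ms[b] else b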
This is a Hard Problem and you might be trying to get away with an unrealistically small amount of overprovisioning.
Have you measured Pressure Stall Information or active pages from /proc/meminfo?
> Attempting to enumerate every resource variable (CPU, IOPS, RSS, Disk, logical count) into a single scoring function feels like an NP-hard trap.
That's perfect for machine learning.