Commit 60cada2

shakeelbakpm00 authored and committed

memcg: optimize memcg_rstat_updated
Currently the kernel maintains the stats updates per-memcg, which is needed to implement the stats flushing threshold. On the update side, the update is added to the per-cpu per-memcg counter of the given memcg and all of its ancestors. However, when the given memcg has passed the flushing threshold, all of its ancestors must have passed the threshold as well, so there is no need to keep traversing up the memcg tree to maintain the stats updates.

Perf profiles collected from our fleet show that memcg_rstat_updated is one of the most expensive memcg functions, i.e. a lot of cumulative CPU is being spent on it, so even small micro-optimizations matter a lot. This patch was microbenchmarked with multiple instances of netperf on a single machine with a locally running netserver, and we see a couple of percent of improvement.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Shakeel Butt <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Reviewed-by: Yosry Ahmed <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 585a914 commit 60cada2

File tree

1 file changed: +9 −7 lines changed


mm/memcontrol.c

Lines changed: 9 additions & 7 deletions
@@ -592,18 +592,20 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
 	cgroup_rstat_updated(memcg->css.cgroup, cpu);
 	statc = this_cpu_ptr(memcg->vmstats_percpu);
 	for (; statc; statc = statc->parent) {
+		/*
+		 * If @memcg is already flushable then all its ancestors are
+		 * flushable as well and also there is no need to increase
+		 * stats_updates.
+		 */
+		if (memcg_vmstats_needs_flush(statc->vmstats))
+			break;
+
 		stats_updates = READ_ONCE(statc->stats_updates) + abs(val);
 		WRITE_ONCE(statc->stats_updates, stats_updates);
 		if (stats_updates < MEMCG_CHARGE_BATCH)
 			continue;
 
-		/*
-		 * If @memcg is already flush-able, increasing stats_updates is
-		 * redundant. Avoid the overhead of the atomic update.
-		 */
-		if (!memcg_vmstats_needs_flush(statc->vmstats))
-			atomic64_add(stats_updates,
-				     &statc->vmstats->stats_updates);
+		atomic64_add(stats_updates, &statc->vmstats->stats_updates);
 		WRITE_ONCE(statc->stats_updates, 0);
 	}
 }
