Hi, thank you for sharing this nice tool! I've needed to adapt it to cgroups v2 for my Slurm cluster:

- There are no separate cpu or memory controller hierarchies (/sys/fs/cgroup/{cpu,memory}) at the root of the path.
- There is no cpuacct.usage, but we can use the usage_usec field in the cpu.stat file.
- The job's cgroup is /sys/fs/cgroup/system.slice/slurmstepd.scope/job_%v instead of /sys/fs/cgroup/.../slurm/uid_%v/job_%v.

It is a small patch to cmd/jobperf/nodestats.go. I'll propose an MR shortly.
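For reference, here is a minimal Go sketch of the cgroup v2 read path described above: it pulls the usage_usec field out of cpu.stat under the Slurm job cgroup instead of reading cpuacct.usage. This is not the actual jobperf patch; the function name, the command-line wrapper, and the exact job path format are illustrative assumptions.

```go
// Hypothetical sketch, not the real cmd/jobperf/nodestats.go code:
// read a job's CPU time from the usage_usec field of cpu.stat (cgroup v2),
// which replaces the cpuacct.usage file used on cgroup v1.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCgroupV2CPUUsage parses the usage_usec line from a cgroup v2 cpu.stat
// file and returns it as a time.Duration. The path layout assumes the Slurm
// job cgroup reported in the issue (system.slice/slurmstepd.scope/job_<id>).
func readCgroupV2CPUUsage(jobID string) (time.Duration, error) {
	path := fmt.Sprintf("/sys/fs/cgroup/system.slice/slurmstepd.scope/job_%s/cpu.stat", jobID)
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, err := strconv.ParseInt(fields[1], 10, 64)
			if err != nil {
				return 0, err
			}
			return time.Duration(usec) * time.Microsecond, nil
		}
	}
	if err := sc.Err(); err != nil {
		return 0, err
	}
	return 0, fmt.Errorf("usage_usec not found in %s", path)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: cpuusage <jobid>")
		os.Exit(1)
	}
	d, err := readCgroupV2CPUUsage(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CPU time:", d)
}
```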
Thanks for looking into this! An MR would be appreciated -- there's a good chance we'll be moving to cgroupv2 in the not too distant future.
Fix clemsonciti#3 cgroupv2 support (commit 91176aa): detect v1/v2 presence at each call to nodestats (via a single file stat). Parse the cpu.stat file and memory.current, memory.peak for cgroupv2.
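A rough illustration of that approach, with made-up names and not taken from commit 91176aa: detect the unified hierarchy with a single os.Stat on /sys/fs/cgroup/cgroup.controllers (a file that only exists under cgroup v2), then read memory.current and memory.peak as plain integer counter files. The job directory below is a placeholder, and memory.peak requires a reasonably recent kernel.

```go
// Sketch of the detection-plus-parsing idea from the commit message.
// Names and paths are assumptions, not the actual jobperf implementation.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cgroupV2Available reports whether the unified (v2) hierarchy is mounted,
// using one stat call on a file that exists only under cgroup v2.
func cgroupV2Available() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

// readCounter reads a single integer counter file such as memory.current
// or memory.peak from the given cgroup directory.
func readCounter(cgroupDir, name string) (uint64, error) {
	b, err := os.ReadFile(cgroupDir + "/" + name)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
}

func main() {
	if !cgroupV2Available() {
		fmt.Println("cgroup v1 detected; fall back to the original code path")
		return
	}
	// Hypothetical job cgroup directory following the layout from the issue.
	dir := "/sys/fs/cgroup/system.slice/slurmstepd.scope/job_12345"
	for _, name := range []string{"memory.current", "memory.peak"} {
		if v, err := readCounter(dir, name); err == nil {
			fmt.Printf("%s = %d bytes\n", name, v)
		} else {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```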