Conversation

@kolyshkin (Contributor) commented Jul 16, 2025:

Requires (and currently includes) PR #4822; draft until that one is merged.

It makes sense to make runc exec benefit from clone3(CLONE_INTO_CGROUP) when available. Since this requires a recent kernel and might not work, implement a fallback.

Based on work done in https://go-review.googlesource.com/c/go/+/417695 (Go's CgroupFD/UseCgroupFD support in syscall.SysProcAttr).

Closes: #4782.
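
For readers unfamiliar with the mechanism: CLONE_INTO_CGROUP is a clone3() flag (Linux 5.7+, cgroup v2 only) that makes the kernel place the child directly into a target cgroup at creation time, instead of forking first and then writing the PID to cgroup.procs. A minimal sketch of the idea using the Go support from the CL referenced above (syscall.SysProcAttr's UseCgroupFD/CgroupFD fields, Go 1.20+); the cgroup path is illustrative and this is not runc's actual code:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/true")

	// Open the target cgroup directory (path is illustrative); a plain
	// O_RDONLY directory fd is enough for CLONE_INTO_CGROUP.
	if cg, err := os.Open("/sys/fs/cgroup/system.slice/test_busybox"); err == nil {
		defer cg.Close()
		cmd.SysProcAttr = &syscall.SysProcAttr{
			UseCgroupFD: true, // tell clone3 to use CLONE_INTO_CGROUP
			CgroupFD:    int(cg.Fd()),
		}
	}

	if err := cmd.Run(); err != nil {
		// Fallback for kernels before 5.7 (or a seccomp-blocked clone3):
		// start without CgroupFD, then move the PID via cgroup.procs.
		cmd = exec.Command("/bin/true")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
```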

@kolyshkin force-pushed the exec-clone-into-cgroup branch from aa873c8 to 115aa1f on July 16, 2025 02:27
(Several comments from @kolyshkin and @cyphar were marked as resolved.)
@kolyshkin force-pushed the exec-clone-into-cgroup branch 4 times, most recently from 467d16c to dfcf22a on July 16, 2025 07:47
@lifubang (Member) commented:

> I need some time to digest this. Any feedback is welcome.

I notice that all the failures occurred in rootless container tests. This might be related to:

// On cgroup v2 + nesting + domain controllers, WriteCgroupProc may fail with EBUSY.

However, you mentioned we're seeing an ENOENT error here, so that may not be the cause.
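
For context on that code comment: on cgroup v2, attaching a process means writing its PID to the target cgroup's cgroup.procs file, and the kernel can reject the write with EBUSY when it would violate the "no internal processes" rule. A simplified sketch of such a write, with an assumed helper name (not runc's actual implementation):

```go
package cgroups // illustrative package, not runc's actual code

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// writeCgroupProc attaches pid to the cgroup at dir by writing it to the
// cgroup.procs file (helper name assumed for illustration).
func writeCgroupProc(dir string, pid int) error {
	f, err := os.OpenFile(filepath.Join(dir, "cgroup.procs"), os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = fmt.Fprintf(f, "%d", pid)
	if errors.Is(err, unix.EBUSY) {
		// cgroup v2 "no internal processes" rule: a domain cgroup with
		// domain controllers enabled and child cgroups cannot hold
		// processes directly; the caller needs a fallback, e.g. a leaf
		// sub-cgroup.
	}
	return err
}
```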

@kolyshkin force-pushed the exec-clone-into-cgroup branch from dfcf22a to 6095b61 on July 16, 2025 22:46
@cyphar (Member) commented Jul 17, 2025:

@kolyshkin Wait, I thought we always communicated with systemd when using cgroup2 -- systemd is very happy to mess with our cgroups (including clearing limits and various other quite dangerous behaviour) if we don't tell it that we are managing the cgroup with Delegate=yes. Maybe this has changed over the years, but I'm fairly certain the initial implementations of this stuff all communicated something with systemd regardless of the cgroup driver used.

Is this just for our testing, or are users actually using this? Because we will need to fix that if we have users on systemd-based systems using cgroups directly without transient units...

@kolyshkin (Contributor, Author) commented:

> @kolyshkin Wait, I thought we always communicated with systemd when using cgroup2 -- systemd is very happy to mess with our cgroups (including clearing limits and various other quite dangerous behaviour) if we don't tell it that we are managing the cgroup with Delegate=yes. Maybe this has changed over the years, but I'm fairly certain the initial implementations of this stuff all communicated something with systemd regardless of the cgroup driver used.
>
> Is this just for our testing, or are users actually using this? Because we will need to fix that if we have users on systemd-based systems using cgroups directly without transient units...

When you use runc directly, unless --systemd-cgroup is explicitly specified, the fs/fs2 driver is used and runc does not communicate with systemd in any way. That might be just fine if systemd is configured not to touch a specific cgroup path (and everything under it) and runc creates its cgroups under that path. Having said that, runc with the fs/fs2 driver neither configures such a setup nor checks whether it is configured.

I'm pretty sure it has been that way from the very beginning.

One other thing: when using the systemd driver, we configure everything via systemd and then use the fs/fs2 driver to write to the cgroup directly. This is also how things have always been. One reason is that we did not take much care to translate the OCI spec into systemd settings (which is now mostly fixed). Another reason is that systemd doesn't support all the per-cgroup settings the kernel has, so some of them can't be expressed as systemd unit properties.

@kolyshkin (Contributor, Author) commented:

> > I need some time to digest this. Any feedback is welcome.
>
> I notice that all the failures occurred in rootless container tests. This might be related to:
>
> // On cgroup v2 + nesting + domain controllers, WriteCgroupProc may fail with EBUSY.
>
> However, you mentioned we're seeing an ENOENT error here, so that may not be the cause.

The thing is, while the comment says "EBUSY", the actual code doesn't check for a particular error; it takes this fallback on any error (including ENOENT).

My guess is that with the systemd driver we actually need an AddPid cgroup driver method to add a pid (such as an exec pid) into a pre-created cgroup (as opposed to Apply, which creates the cgroup). I'm working on adding it.
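
A rough sketch of what that split could look like; the interface shape and names here are assumptions based on the description above, not runc's final API:

```go
package cgroups // illustrative

// Manager sketches a cgroup driver interface with creation and attach
// separated, per the comment above; this is not runc's final API.
type Manager interface {
	// Apply creates the container's cgroup (fs/fs2: mkdir under the
	// cgroup root; systemd: start a transient unit) and puts pid in it.
	Apply(pid int) error

	// AddPid puts pid (e.g. a runc exec process) into the pre-created
	// cgroup without creating anything; for the systemd driver this
	// means asking systemd to attach the process to the existing unit.
	AddPid(pid int) error
}
```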

@kolyshkin force-pushed the exec-clone-into-cgroup branch 6 times, most recently from 6e3bf36 to 9489925 on July 29, 2025 01:04
@kolyshkin (Contributor, Author) commented:

Apparently, we are also not placing rootless container execs into the proper cgroup (which is still possible when using the cgroup v2 systemd driver, but we'd need to use AttachProcessesToUnit). As a result, container init and exec run in different cgroups. This is a problem because rootless + cgroup v2 + systemd driver can still set resource limits, and exec runs without them.

Tracking it in #4822.
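
For reference, AttachProcessesToUnit is a method on systemd's org.freedesktop.systemd1.Manager D-Bus interface. A minimal sketch of calling it directly via the godbus library (unit name and PID are illustrative; runc itself would go through its systemd cgroup driver):

```go
package main

import (
	"os"

	dbus "github.com/godbus/dbus/v5"
)

func main() {
	// A rootless runc would talk to the per-user bus; the system bus is
	// used here for brevity.
	conn, err := dbus.SystemBus()
	if err != nil {
		panic(err)
	}

	obj := conn.Object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")

	// AttachProcessesToUnit(in s unit_name, in s subcgroup, in au pids)
	// attaches the given PIDs to the unit's cgroup (or a sub-cgroup of
	// it). Unit name and PID are illustrative.
	call := obj.Call("org.freedesktop.systemd1.Manager.AttachProcessesToUnit",
		0, "test_busybox.scope", "/", []uint32{uint32(os.Getpid())})
	if call.Err != nil {
		panic(call.Err)
	}
}
```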

@cyphar (Member) commented Aug 15, 2025:

I'm aware of the mixed fs/systemd setup; my confusion was more that systemd very strictly expects to be told about things in cgroup v2, because cgroup v2 was designed around a single global management process. It has been a long time since I reviewed the first cgroup v2 patches for runc, but I remember that being something I was worried about at the time.

The exec issue seems bad too...

@kolyshkin force-pushed the exec-clone-into-cgroup branch from 9489925 to f96e179 on September 8, 2025 19:45
@kolyshkin added this to the 1.4.0-rc.2 milestone on Sep 16, 2025
@kolyshkin force-pushed the exec-clone-into-cgroup branch 4 times, most recently from b56a52b to 1652210 on September 17, 2025 00:01
@kolyshkin (Contributor, Author) commented:

> OK, I did some debugging and have very bad news to share.
>
> Apparently GHA moves the process we create (the container's init) to a different cgroup. Here's an excerpt from the debug logs (using the fs2 cgroup driver):
>
>     runc run -d --console-socket /tmp/bats-run-X8QSrN/runc.7IFV0b/tty/sock test_busybox (status=0)
>     time="2025-07-16T02:31:13Z" level=info msg="XXX container init cgroup /sys/fs/cgroup/system.slice/test_busybox"
>
> Here ^^^ runc created a container and put its init into the /system.slice/test_busybox cgroup.
>
>     runc exec test_busybox stat /tmp/mount-1/foo.txt /tmp/mount-2/foo.txt (status=255)
>     XXX container test_busybox init cgroup: /system.slice/hosted-compute-agent.service (present)
>
> Here ^^^ the same container init is unexpectedly in the /system.slice/hosted-compute-agent.service cgroup.
>
>     time="2025-07-16T02:31:13Z" level=error msg="exec failed: unable to start container process: can't open cgroup: open /sys/fs/cgroup/system.slice/test_busybox: no such file or directory"
>
> And here ^^^ runc exec failed because the container's cgroup no longer exists.
>
> Maybe this is what systemd does? But it doesn't do that on my machine.
>
> I need some time to digest this. Any feedback is welcome.

Guess what: this is no longer happening. Based on the cgroup name (hosted-compute-agent.service), I suspect it was caused by a bug in Azure (or, more specifically, GHA CI) infrastructure software.


@kolyshkin force-pushed the exec-clone-into-cgroup branch from 58ed84c to 99977d2 on September 18, 2025 02:01
@kolyshkin requested a review from @rata on September 18, 2025 02:02
@rata (Member) left a comment:

I'm unsure whether retrying only when specific errors are returned is enough to not screw it up on older kernels.

Also, this means that on old kernels the first clone call will always fail. I guess that's fine?

@kolyshkin force-pushed the exec-clone-into-cgroup branch from 99977d2 to 1b6d405 on September 18, 2025 18:17
Commit message:

    This is based on work done in [1].

    Since the functionality requires a recent kernel and might not work,
    implement a fallback.

    [1]: https://go-review.googlesource.com/c/go/+/417695
    Signed-off-by: Kir Kolyshkin <[email protected]>
@rata (Member) left a comment:
LGTM, thanks! One nit about a debug line, but worst case we can add it later.

    // Rootless has no direct access to cgroup.
    return true
    }

@rata commented on the snippet above:

I fear we might not retry in a case where we should, and then the container will never start for the user, which would be a big regression. Can we log the error at debug level before returning false?

That way, if this happens, we can ask the user to run with debug enabled and check the error they get, and we just handle that case too.
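
A sketch of the kind of error check (plus the debug logging requested above); the helper name and exact error set are assumptions, kernels without the feature typically returning ENOSYS (no clone3 at all) or EINVAL (unknown flag):

```go
package libct // illustrative

import (
	"errors"

	"github.com/sirupsen/logrus"
	"golang.org/x/sys/unix"
)

// canRetryWithoutCgroupFD decides whether a failed clone3(CLONE_INTO_CGROUP)
// start should be retried the old way (helper name and error set assumed).
func canRetryWithoutCgroupFD(err error) bool {
	// ENOSYS: no clone3 at all (or blocked by a seccomp profile);
	// EINVAL: clone3 exists but CLONE_INTO_CGROUP is unknown (pre-5.7
	// kernel) or the cgroup fd is unusable.
	for _, e := range []error{unix.ENOSYS, unix.EINVAL} {
		if errors.Is(err, e) {
			return true
		}
	}
	// Log the unhandled error so a user hitting an unexpected case can
	// report it from a --debug run, per the review comment above.
	logrus.Debugf("not retrying exec without CLONE_INTO_CGROUP: %v", err)
	return false
}
```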

Successfully merging this pull request may close these issues: #4782.

4 participants