Conversation

@littlejawa
Contributor

@littlejawa littlejawa commented Apr 12, 2024

What type of PR is this?

/kind feature

What this PR does / why we need it:

With #7471 we introduced a way to support the Confidential Containers feature of pulling the image inside the guest VM.
When this happens, CRI-O still pulls the image on the host, because Kubernetes keeps sending PullImageRequest and CRI-O has no reason not to process it.

This has two drawbacks:

  1. It is a waste of time and resources, as the image that CRI-O pulls and prepares will not be used anyway.
  2. It can block the actual execution of the container in the Confidential Containers use case: if the image is encrypted and the key is not shared with CRI-O, then CRI-O will fail to prepare the image because it cannot read it.
    This failure blocks container creation, as Kubernetes will not proceed with CreateContainer if PullImage failed.

This PR makes CRI-O skip the image pull phase when the runtime is configured with the "runtime_pull_image" flag introduced in #7471.
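
For reference, a minimal sketch of what such a runtime configuration could look like in a crio.conf drop-in (the runtime name and path are illustrative; only runtime_pull_image is the flag from #7471):

# Illustrative drop-in; the runtime name and path are hypothetical.
[crio.runtime.runtimes.kata-cc]
runtime_path = "/usr/bin/containerd-shim-kata-v2"
runtime_type = "vm"
# From #7471: this runtime pulls the image inside the guest VM,
# so (with this PR) CRI-O skips its own host-side pull.
runtime_pull_image = true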

Which issue(s) this PR fixes:

Fixes: #8261

Special notes for your reviewer:

I have modified the code in CRI-O to skip the image pull processing when the runtime is configured with the "runtime_pull_image" flag.
CRI-O then just reports a success status to Kubernetes, which accepts it, even when subsequent ImageList or ImageStatus calls report the image as missing.

Now, without an image in the local store, CRI-O will fail to create the container. This required further changes in create_sandbox_container and runtimeService to get information from the remote repository without pulling the image.

I kept those changes in separate commits for easier review, but I suspect it would make sense to squash them into a single, functional commit.

Does this PR introduce a user-facing change?

Allow the runtime to be configured to skip the image pull phase, and make sure that CRI-O can still run the container. This enables the Confidential Containers runtime, which pulls the container image separately.

@littlejawa littlejawa requested a review from mrunalp as a code owner April 12, 2024 09:01
@openshift-ci openshift-ci bot added release-note-none Denotes a PR that doesn't merit a release note. kind/feature Categorizes issue or PR as related to a new feature. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Apr 12, 2024
@openshift-ci openshift-ci bot requested review from klihub and wgahnagl April 12, 2024 09:02
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2024
@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 15, 2024
@littlejawa littlejawa force-pushed the skip_pullimage branch 3 times, most recently from ad5e548 to 6c24d32 on June 19, 2024 14:48
@codecov

codecov bot commented Jun 19, 2024

Codecov Report

❌ Patch coverage is 33.73016% with 167 lines in your changes missing coverage. Please review.
✅ Project coverage is 46.71%. Comparing base (5fe662d) to head (0975645).
⚠️ Report is 746 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8008      +/-   ##
==========================================
- Coverage   46.96%   46.71%   -0.25%     
==========================================
  Files         150      150              
  Lines       21873    22056     +183     
==========================================
+ Hits        10272    10303      +31     
- Misses      10538    10683     +145     
- Partials     1063     1070       +7     

@kwilczynski
Contributor

/retest

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 9, 2024
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 11, 2024
@kwilczynski kwilczynski added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 12, 2024
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 26, 2024
@openshift-ci
Contributor

openshift-ci bot commented Sep 26, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: littlejawa
Once this PR has been reviewed and has the lgtm label, please assign saschagrunert for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 19, 2024
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 12, 2024
someNameOfTheImage, imageID, someRepoDigest, imageAnnotations, err := s.getInfoFromImage(userRequestedImage)
if err != nil {
	return nil, err
}
if isRuntimePullImage {
Member

why are we using the artifact functions here?

Member

ah @saschagrunert suggested this. it feels weird to me to use the artifact functions here, why did we go this route?

Contributor Author

If the image is not there, we need a way to access the remote repo's metadata to get all needed info, without pulling the full image.
One way or another, I need to access the data without pulling the layers. The containers/image API did not allow me to do that (or I missed it?), so Sascha suggested this alternative, which allows me to get just the manifest and config that I need.

Member

yeah I think the approach is good, but maybe name the functions differently so it's clearer that's what we're doing here?

Member

Yeah we can split the existing bits if required. 👍

@haircommander
Member

@mtrmac would you be able to PTAL

Collaborator

@mtrmac left a comment

I’m afraid I don’t understand the full problem space.

Still, this seems closer to a naive prototype, with a lot of divergence from the current CRI-O behavior.

  • Doing the pull outside of PullImage means Pod’s image pull secrets will not be available. I think that’s simply not viable, especially for the security-paranoid customers this feature might attract. So if the VM API requires the pull to happen only later in CreateContainer, it seems to me that the VM API needs to be redesigned now, before this could gain (any more) users.
  • Users, sadly (and the more “enterprise” they are, the worse), rely on short names, and tags. These things must be resolved once into a stable image identifier (what PullImage returns as an ImageRef). This PR does not implement the (regrettable) unqualified search at all, and it is adding code paths with tag-change races all over the place.
  • I have no idea what it means to have StorageImageID values that no longer refer to local storage, with nothing added to support that. I suspect that things break all over the codebase (for starters, the IsRunningImageAllowed call will simply fail — and that code path is used by default on all RHEL and OpenShift systems).
    • E.g. what happens with ImageStatus (when Kubelet asks, checking whether it needs to pull at all)? With ListImages and Kubelet-driven GC?
    • Overall I find this PR very unlikely to be correct. If this PR were a complete reimplementation of ImageServer for the VM backend, I don’t know, my eyes would probably glaze over but I would think that it’s possible that the rest of CRI-O can work with no changes. Just injecting StorageImageID values that don’t refer to anything… I can’t believe that works.

name = reference.TagNameOnly(name) // make sure to add ":latest" if needed

ref, err := o.impl.NewReference(name)
src, err := o.getSourceFromImageName(ctx, img, opts)
Collaborator

(pre-existing?) src must be closed, otherwise this leaves around keep-alive HTTP connections.
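
A minimal sketch of the fix being requested, assuming src is a containers/image types.ImageSource, whose Close releases the underlying connections:

src, err := o.getSourceFromImageName(ctx, img, opts)
if err != nil {
	return nil, err
}
// An ImageSource holds network resources (keep-alive HTTP connections);
// release it as soon as this function is done with it.
defer src.Close()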

	opts = &PullOptions{}
}

src, err := o.getSourceFromImageName(ctx, img, opts)
Collaborator

Creating an ImageSource is fairly costly (HTTP round-trips); it should only be done once, not from scratch for every operation. Same for GetConfig.


Overall, the way this file has been refactored seems mostly misguided to me; it introduces abstraction boundaries in the wrong places.

}

unparsedInstance := unparsedToplevel
if manifest.MIMETypeIsMultiImage(topMIMEType) {
Collaborator

It is pretty surprising for GetConfig to do its own manifest resolution. Why isn’t that centralized?

This is both costly (fetching manifests twice or more), and risks that the code will diverge (… as it seems to already have done).
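
For reference, the centralized resolution in the excerpt above looks roughly like this in the containers/image/v5 API (a sketch; manifestBytes/topMIMEType are assumed to come from the single top-level manifest fetch, and sys is an assumed *types.SystemContext):

// Resolve a manifest list to a single instance exactly once, then hand the
// resolved instance to every consumer instead of letting GetConfig re-fetch
// and re-resolve the manifest on its own.
unparsedInstance := unparsedToplevel
if manifest.MIMETypeIsMultiImage(topMIMEType) {
	list, err := manifest.ListFromBlob(manifestBytes, topMIMEType)
	if err != nil {
		return nil, err
	}
	instanceDigest, err := list.ChooseInstance(sys)
	if err != nil {
		return nil, err
	}
	unparsedInstance = image.UnparsedInstance(src, &instanceDigest)
}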

}

func (o *OCIArtifact) getParsedManifest(ctx context.Context, src types.ImageSource, opts *PullOptions) (manifest.Manifest, error) {
	manifestBytes, mimeType, err := o.impl.GetManifest(ctx, src, nil)
Collaborator

GetManifest means the caller is responsible for validating digest references; this one isn’t doing so.

Use UnparsedInstance instead; that can also handle caching to avoid repeated requests about the same image.
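
A minimal sketch of the suggested replacement, assuming the containers/image/v5 image package (a nil instance digest selects the top-level manifest):

// UnparsedInstance validates the manifest bytes against a digested
// reference (when the reference carries one) and caches them across
// repeated calls.
unparsed := image.UnparsedInstance(src, nil)
manifestBytes, mimeType, err := unparsed.Manifest(ctx)
if err != nil {
	return nil, err
}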


// SpecAddAnnotations adds annotations to the spec.
SpecAddAnnotations(ctx context.Context, sb SandboxIFace, containerVolume []oci.ContainerVolume, mountPoint, configStopSignal string, imageResult *storage.ImageResult, isSystemd bool, seccompRef, platformRuntimePath string) error
SpecAddAnnotations(ctx context.Context, sb SandboxIFace, containerVolume []oci.ContainerVolume, mountPoint, configStopSignal string, imageName *references.RegistryImageReference, imageID *storage.StorageImageID, isSystemd bool, seccompRef, platformRuntimePath string) error
Collaborator

I have spent a lot of effort (and attracted a lot of disagreement and controversy) to explicitly document the weird semantics of these values, by using names like SomeNameOfThisImage, as well as accompanying comments.

So, naturally, I want all those caveats to be preserved and the comments to be duplicated if this is going to stop referencing the existing data structures.

if cfg == nil || cfg.Metadata == nil {
	return "", nil
}
name := strings.Join([]string{
Collaborator

Is there any way to avoid a FOURTH copy of the same hard-coded convention?
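
A sketch of the deduplication being asked for; the helper name is hypothetical, and the joined fields are an assumption based on CRI-O's usual "k8s"-prefixed, "_"-joined container names:

// kubeContainerName is a hypothetical shared helper, so that the naming
// convention lives in exactly one place instead of four copies.
func kubeContainerName(meta *types.ContainerMetadata, sbMeta *types.PodSandboxMetadata) string {
	return strings.Join([]string{
		"k8s",
		meta.Name,
		sbMeta.Name,
		sbMeta.Namespace,
		sbMeta.Uid,
		strconv.FormatUint(uint64(meta.Attempt), 10),
	}, "_")
}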

rhName, rh := s.GetRuntimeHandlerForPod(ctx, req.SandboxConfig)
if rh != nil {
	if rh.RuntimePullImage {
		// don't pull the image in crio, as it will be done by the runtime
Collaborator

What happens if the image does not exist at all?

What happens if the network is flaky? This effectively blinds Kubelet’s visibility into pulls, and breaks its retry logic.

The credentials provided by Kubelet are discarded.

Collaborator

Also, Kubelet can gather credentials for several sources and will loop over them in PullImage until it finds credentials accepted by the registry.

I don’t particularly like that behavior but that’s one more reason for PullImage to be supported properly, whatever that means (a RPC to the VM?).

// don't pull the image in crio, as it will be done by the runtime
log.Debugf(ctx, "Skip image pull for runtime %s - image %s", rhName, image)
return &types.PullImageResponse{
	ImageRef: req.Image.Image,
Collaborator

Nowadays, the returned ImageRef is always a digested reference. If this code path starts putting less structured user-controlled data into the field, I don’t immediately know what that means. E.g. does ImageStatus accept these values?
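
For contrast, a sketch of one way to keep returning a digested ImageRef without pulling layers, using containers/image/v5 (this is not what the PR currently does; sys is an assumed *types.SystemContext):

// Resolve the user-supplied name to repo@digest by fetching only the
// manifest, so the returned ImageRef keeps its usual digested form.
ref, err := docker.ParseReference("//" + image)
if err != nil {
	return nil, err
}
src, err := ref.NewImageSource(ctx, sys)
if err != nil {
	return nil, err
}
defer src.Close()
manifestBytes, _, err := src.GetManifest(ctx, nil)
if err != nil {
	return nil, err
}
dgst, err := manifest.Digest(manifestBytes)
if err != nil {
	return nil, err
}
imageRef := ref.DockerReference().Name() + "@" + dgst.String()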

// don't pull the image in crio, as it will be done by the runtime
log.Debugf(ctx, "Skip image pull for runtime %s - image %s", rhName, image)
return &types.PullImageResponse{
	ImageRef: req.Image.Image,
Collaborator

At least per the lines just above, req.Image can be nil. And the code has already dealt with that, and the line above uses the result; i.e. the debug log and the actually-returned value are not consistent.

return parsedManifest, nil
}

func (o *OCIArtifact) GetConfig(ctx context.Context, img string, opts *PullOptions) (*v1.Image, error) {
Collaborator

In general, artifacts don’t have image configs. If they have image configs, they are images. (I can see a semantic argument that “image is an artifact”, but… what are we doing here?)

@mtrmac
Collaborator

mtrmac commented Jan 31, 2025

Still, this seems closer to a naive prototype

I realize that, at least with #7471, many parts of that are already true:

  • There is a race between CRI-O’s and the VM’s pulls of tagged images, and short-name resolution
  • The VM is pulling images in CreateContainer, and without access to the Pod’s pull secret and with no Kubelet visibility
  • CRI-O has ~no visibility into VM’s use of the images

But, well, it’s been over a year since that PR. If it is not going to be handled now, is it ever going to be handled?

@littlejawa
Contributor Author

Hey @mtrmac ,
Thank you for your review.

Let me try to answer some of your questions:

There is a race between CRI-O’s and the VM’s pulls of tagged images, and short-name resolution

Name resolution needs to happen on the VM side, where the pull occurs.
I have to double-check if/how we do it, but we definitely need to take care of short names.
Thanks for pointing that out.

The VM is pulling images in CreateContainer, and without access to the Pod’s pull secret and with no Kubelet visibility

That is actually done on purpose.

The problem we're trying to address with Confidential Containers is when customers want to run a sensitive workload on a cluster they don't trust.
Typically: using public clouds, to benefit from specific hardware without compromising security.
In this approach we can't take any secret or image coming from the host, because by definition they could be compromised.
And we have to prevent the host from accessing the content of the VM in any way.

The current approach in CoCo is to provision the image pull secret inside the VM before the image pull. This secret, along with other configuration such as the container policy, signature verification keys, and decryption keys, is provisioned after verification of the VM environment. Only then is the image pull executed.
In other words: nothing is taken from the host, and nothing gets pulled within the VM before we have verified that we're running in a trusted environment.

CRI-O has ~no visibility into VM’s use of the images

My feeling is that, from CRI-O's perspective, there is just a container being created.
If the creation fails (because a pull error occurred within the VM for instance), the runtime will report it to CRI-O so that the user can see it and address it.
For everything else, we actually want to keep CRI-O unaware of what happens within the VM.

I have no idea what it means to have StorageImageID values that no longer refer to local storage, with nothing added to support that.

Me neither :)
I don't know enough of the internals of cri-o to understand all the implications of not pulling the image.
That's why I have been asking for help from the beginning (see #8261).

Some questions from me about that (also to @haircommander):
I wanted to make CRI-O ignore the image, but the StorageID (and other things) are required all through the code base, so it seems like I actually need to make something up. Now you're telling me that's wrong, and I don't disagree.
Is there a way to have CRI-O ignore all these?
Rather than building a fake ID, can we return and use something that clearly states we're not dealing with a local image?
One of the things I tried in early versions was to trick CRI-O into using the pause image's storage as the local reference - not as a solution, more like a PoC. Should we have something similar, like an "empty image", that we could reference?

@mtrmac and @saschagrunert mentioned the ImageServer. I can make one used exclusively for the Confidential use case, but wouldn't it still need to output some references to (fake) storage IDs? I didn't go that route because it didn't seem to fix my problem, but I'm willing to go that way if it helps.

Again, any suggestion is welcome.

@mtrmac
Collaborator

mtrmac commented Feb 5, 2025

There is a race between CRI-O’s and the VM’s pulls of tagged images, and short-name resolution

Name resolution needs to happen on the VM side, where the pull occurs.

See a parallel conversation; I think CRI-O and the VM must structurally be set up so that they agree on the identity of the image being used.

The VM is pulling images in CreateContainer, and without access to the Pod’s pull secret and with no Kubelet visibility

That is actually done on purpose.

The problem we're trying to address with Confidential Containers is when customers want to run a sensitive workload on a cluster they don't trust. Typically: using public clouds, to benefit from specific hardware without compromising security. In this approach we can't take any secret or image coming from the host, because by definition they could be compromised. And we have to prevent the host from accessing the content of the VM in any way.

I have only a very vague idea of how confidential containers work, so this might be completely wrong: I thought confidential containers are confidential by encrypting the contents of the image; in this threat model, the encrypted contents of the image are ~public enough (e.g. it must be possible to copy the container image to the cloud where it is to be run). Is it really the case that the registry credentials must be invisible to CRI-O = that the registry is a trusted component in the confidential container design?

The current approach in CoCo is to provision the image pull secret inside the VM before image pull. … signature verification keys …are provisioned after verification of the VM environment.

And all of that happens invisible to CRI-O? That… complicates maintenance a lot. How is feature parity going to be maintained over time if there is a parallel implementation invisible to the CRI-O codebase (and drive-by contributors ~like me not being aware of it at all)?

CRI-O has ~no visibility into VM’s use of the images

My feeling is that, from CRI-O's perspective, there is just a container being created.

Kubelet is still going to report on node’s disk usage, and it is still going to run its image garbage collection loop, isn’t it? And possibly try to evict / make scheduling decisions based on node’s disk usage. AFAICS right now that is all based on the filesystem hosting the node’s c/storage store, with no idea that the VMs might be elsewhere, possibly unaware that the node’s filesystem is critically overloaded.

Some questions from me about that (also to @haircommander): I wanted to make CRI-O ignore the image, but the StorageID (and others things) are required all through the code base

As a first guess, I think all of that code ~assumes that images exist locally in c/storage. If that is not going to be the case, that code needs to either be, function by function, modified to handle in-VM images, or analyzed to make an argument that CRI-O does not actually need to do anything because $reason. Hypothetically it might be the case that everything is in the latter category and we can just fake values, but I think that’s very unlikely.

I’d expect at least ImageServer, or some modified version of it (one that does not expose the raw .GetStore(), and instead provides more high-level operations) would probably need to have a separate implementation for in-VM images. (And that would show up in the after-CRI-O restart container reload code, in snapshot/restore, in the disk usage management.)

IOW, if the design is that we have two very different storage backends, I’d expect that to be directly reflected / modeled in the CRI-O codebase. It can’t be one or two ifs and the rest of the codebase being completely unaware.

And if the same node could contain both c/storage images and in-VM images simultaneously… that would also need to be explicitly modeled.

Some good news is that StorageImageID is not an external API; Kubelet uses the provided values, but IIRC it just uses it as an opaque string. So it should be possible to virtualize it / to model the two storage mechanisms, even if the VM storage had a rather different image reference mechanism. But, also, the bad news is that some structural assumptions (images can be pulled and then run; images can be listed; images don’t need to be pulled if they already exist on the node) are hard-coded in the shape of the CRI.

@mtrmac
Collaborator

mtrmac commented Feb 5, 2025

Kubelet uses the provided values

Also, some of the APIs / code paths are ambiguous WRT whether they accept registry references or storage IDs. In some cases Kubelet does not actually care, to the extent that it just obtains the value from one CRI API and provides it to another ({PullImage,ImageStatus} -> CreateContainer) … but other users like security software really want to see repo@digest values.

@haircommander
Member

I’d expect at least ImageServer, or some modified version of it (one that does not expose the raw .GetStore(), and instead provides more high-level operations) would probably need to have a separate implementation for in-VM images. (And that would show up in the after-CRI-O restart container reload code, in snapshot/restore, in the disk usage management.)

yeah I agree, I think any place we use storageID we'll probably want to evaluate how to delegate the understanding of that image to the VM.

is it possible to have the vm tell CRI-O what the storage ID is? then, cri-o can populate that everywhere. It also could use the Pinned CRI field to make sure the kubelet doesn't consider it for garbage collection, while still reporting it so security scanners and the like can identify them. or is that too much information passed up to cri-o @littlejawa
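
For what it's worth, the CRI v1 Image message already has a Pinned field, so reporting an in-VM image could look roughly like this (the guestImageID value, and how it would be obtained, are exactly the open questions above):

// Hypothetical sketch: report the in-VM image to Kubelet as pinned so its
// garbage collection leaves it alone, while scanners can still list it.
img := &types.Image{
	Id:       guestImageID, // opaque ID agreed upon with the VM
	RepoTags: []string{userRequestedImage},
	Pinned:   true,
}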

@littlejawa
Contributor Author

is it possible to have the vm tell CRI-O what the storage ID is?

@haircommander - We don't have a way in our current API to report that information.
But in any case, I'm afraid we would have a chicken-and-egg problem: the Storage ID is required in CRI-O during the CreateContainer phase before calling the kata runtime, but the image is pulled in the VM when it receives the CreateContainer request from the runtime. At the time we need it, the StorageID doesn't exist, even in the VM.

Now reading mtrmac's comment:

StorageImageID is inherently a reference to local c/storage.
If this is not going to use any image in c/storage… why is it using the same data type?
Shouldn’t the difference in image locations, or whatever this is, be explicitly modeled
throughout the codebase, as separate fields with separate types, or as a properly
abstracted interface with clearly documented properties (and non-properties)?

I've tried to copy the same data type, assuming this was a hard requirement. But maybe that's just wrong?
It brings a lot of assumptions about its content that are just not true in our model.

If we make an abstraction at the ImageServer level, can we then manage another kind of ID that would be VM-specific?
I don't think we would need to get it from the VM itself, but I'll need a better understanding of what Storage ID actually refers to.

If the goal is to have a 1:1 relationship, making sure an ID refers to a uniquely identifiable image, maybe we can find a way to build this ID based on what CRI-O knows about the pod, the runtime process that manages the VM, etc...
Something that clearly points to a single VM and a single image in it - knowing that the VM deals with a single image per container anyway, because every time the image is changed, the pod is restarted, and the VM with it.
If the ID doesn't provide data about the way the image is stored within the VM, but allows pointing to that image in a unique way, is that enough?

@mtrmac
Collaborator

mtrmac commented Feb 10, 2025

I think this needs to start with “what are the immutable parts”. Then we figure out what the behavior should be. And last, what the internal structure of the code to implement that behavior should be. (In principle, this is Open Source, and everything can be changed.) But let’s say that the CRI and most of Kubelet’s behavior are immutable.

IIRC, Kubelet, primarily:

  • Issues pulls: ImageStatus(userInput) = “not found”; PullImage(userInput) = imageRef; createContainer(imageRef)
  • Avoids pulls: ImageStatus(userInput) = imageID; createContainer(imageID)
  • Has a garbage-collection controller (ListImages, some way to determine what is used, DeleteImage) — I only vaguely remember this part.
  • Reports some things in Node / Pod status objects.

That needs to be carefully re-analyzed, but let’s start with that as a hypothesis.

Now, Kubelet has fairly strict expectations on the format of userInput; for imageID, IIRC it doesn’t care ~at all — but the values show up in K8s resources and are user-visible, so it’s possible that various users’ custom scripts expect things.

So, what should the above map to with Confidential Containers?


I've tried to copy the same data type, assuming this was a hard requirement. But maybe that's just wrong?

StorageImageID exists (as a specific strict type) because everything around it is strings, and it was confusing (and frequently confused) what the acceptable syntax for various string fields is in various pieces of the codebase.

It is CRI-O’s ~choice to use the c/storage.Image.ID value as imageID in the CRI API; it plausibly could do something else (but, see above about user-visible values and custom scripts). If Confidential Containers don’t have c/storage, they obviously don’t have c/storage.Image.ID. Still, it might turn out to be necessary to produce exactly the same values (typically = config digest without sha256:)… I don’t know, that’s a K8s question.

My (fairly strong) preference is that the CRI-O codebase should retain the strong type separation between various string values. So there should be some “image ID” type, distinct from “user input” or “repo@digest reference” and others. That might be a single type with a single syntax, or maybe a holder wrapping a interface, or something else, I don’t currently have an opinion — that follows from choosing and designing behavior.
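
To make that concrete, a sketch of what such explicit modeling could look like (every name here is hypothetical):

// imageBackend says where an image actually lives.
type imageBackend int

const (
	backendLocalStorage imageBackend = iota // local c/storage
	backendGuestVM                          // pulled inside the guest VM
)

// ImageID is a hypothetical holder that keeps the strong type separation
// between string values while admitting two storage backends.
type ImageID struct {
	backend imageBackend
	value   string // opaque ID; the syntax depends on the backend
}

func (id ImageID) IsLocal() bool { return id.backend == backendLocalStorage }

// String returns the opaque value reported to Kubelet via the CRI.
func (id ImageID) String() string { return id.value }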

@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 13, 2025
@littlejawa
Contributor Author

For the record - I'm still looking into this, but have been preempted by other tasks.
I'm trying to figure out the right answers to mtrmac's questions, but there is no obvious mapping, as we don't have an API for ImageStatus/ListImage/PullImage.

Now we have an implementation of Confidential Containers on the containerd side that works with encrypted containers.
So somehow, we've managed to make containerd happy without pulling the image, while still returning valid information to Kubernetes.
I want to dig into that implementation and figure out what IDs/structures are returned by Confidential Containers in the context of containerd; it may help find the right answer for CRI-O.
I'm not familiar with the code though, so I need to find time to do that investigation.

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2025
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 13, 2025
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 16, 2025
@openshift-ci
Contributor

openshift-ci bot commented Jul 18, 2025

@littlejawa: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/ci-rhel-critest 0975645 link true /test ci-rhel-critest
ci/prow/ci-rhel-e2e 0975645 link true /test ci-rhel-e2e
ci/prow/ci-fedora-kata 0975645 link true /test ci-fedora-kata
ci/prow/ci-cgroupv2-e2e 0975645 link true /test ci-cgroupv2-e2e
ci/prow/ci-cgroupv2-integration 0975645 link true /test ci-cgroupv2-integration
ci/prow/ci-crun-e2e 0975645 link true /test ci-crun-e2e
ci/prow/ci-e2e 0975645 link true /test ci-e2e
ci/prow/ci-fedora-critest 0975645 link true /test ci-fedora-critest
ci/prow/ci-cgroupv2-e2e-features 0975645 link true /test ci-cgroupv2-e2e-features
ci/prow/ci-cgroupv2-e2e-crun 0975645 link true /test ci-cgroupv2-e2e-crun
ci/prow/ci-e2e-conmonrs 0975645 link true /test ci-e2e-conmonrs
ci/prow/ci-fedora-integration 0975645 link true /test ci-fedora-integration
ci/prow/e2e-aws-ovn 0975645 link true /test e2e-aws-ovn
ci/prow/e2e-gcp-ovn 0975645 link true /test e2e-gcp-ovn



Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 19, 2025
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2025
@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 14, 2025
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 13, 2025
@github-actions

A friendly reminder that this PR had no activity for 30 days.

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 14, 2025

Labels

dco-signoff: yes - Indicates the PR's author has DCO signed all their commits.
help wanted - Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature - Categorizes issue or PR as related to a new feature.
needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
release-note - Denotes a PR that will be considered when it comes time to generate release notes.


Development

Successfully merging this pull request may close these issues.

RFC: how to support the use of encrypted containers in Confidential Containers

7 participants