
Conversation

@littlejawa
Contributor

What type of PR is this?

/kind bug

What this PR does / why we need it:

Found this while working on the kata CI - the job failed the integration tests on:
"ctr pod lifecycle with evented pleg enabled"

Which issue(s) this PR fixes:

None

Special notes for your reviewer:

None

Does this PR introduce a user-facing change?

None

On a STOP action, cri-o was generating a CONTAINER_DELETED_EVENT instead of CONTAINER_STOPPED_EVENT.

Signed-off-by: Julien Ropé <[email protected]>
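
The change itself boils down to swapping the event constant passed to generateCRIEvent in the container stop path. An illustrative before/after (a sketch of the idea, not the literal diff from this PR):

    // before: stopping a container emitted a "deleted" event
    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_DELETED_EVENT)

    // after: emit a "stopped" event instead
    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_STOPPED_EVENT)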
@littlejawa littlejawa requested a review from mrunalp as a code owner January 13, 2023 08:53
@openshift-ci openshift-ci bot added release-note-none Denotes a PR that doesn't merit a release note. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. kind/bug Categorizes issue or PR as related to a bug. labels Jan 13, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jan 13, 2023

Hi @littlejawa. Thanks for your PR.

I'm waiting for a cri-o member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jan 13, 2023
@openshift-ci openshift-ci bot requested review from QiWang19 and wgahnagl January 13, 2023 08:53
@littlejawa
Contributor Author

Marking WIP for now - I'd like to confirm the fix with a working CI run.

Also, I'm wondering if there is another one to fix on this line:

    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_STOPPED_EVENT)

where "CONTAINER_STOPPED_EVENT" should be "CONTAINER_DELETED_EVENT"?

@sairameshv - what do you think?

@sairameshv
Member

sairameshv commented Jan 13, 2023

Marking WIP for now - I'd like to confirm the fix with a working CI run.

Also, I'm wondering if there is another one to fix on this line:

    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_STOPPED_EVENT)

where "CONTAINER_STOPPED_EVENT" should be "CONTAINER_DELETED_EVENT"?
@sairameshv - what do you think?

Hi @littlejawa, why do you think this event type needs to be changed here?
IMO the PR makes sense; however, I would request @harche to give his opinion on this change.

@littlejawa
Contributor Author

Hi @littlejawa, why do you think this event type needs to be changed here?

I'm really not sure.
It's just that this event is generated before a call to "remove", which sounds more like "DELETE" than "STOP" to me.
Just thought it needed a double check, but I'm fine keeping it if you think it's right.

@haircommander
Member

It's just that this event is generated before a call to "remove", which sounds more like "DELETE" than "STOP" to me.

That's removing a temporary file used to indicate the container stopped; I think STOP is appropriate in that case.

@haircommander
Member

/ok-to-test
/approve

this makes sense to me, thanks!

@openshift-ci openshift-ci bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jan 13, 2023
@openshift-ci
Contributor

openshift-ci bot commented Jan 13, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: haircommander, littlejawa

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 13, 2023
@littlejawa
Contributor Author

It's just that this event is generated before a call to "remove", which sounds more like "DELETE" than "STOP" to me.

That's removing a temporary file used to indicate the container stopped; I think STOP is appropriate in that case.

Right. Sorry about that, I read it too quickly.

@harche
Contributor

harche commented Jan 13, 2023

/hold
Need to see the impact of this change on node e2e.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 13, 2023
@harche
Contributor

harche commented Jan 13, 2023

Another thing to note is that ContainerEventType_CONTAINER_STOPPED_EVENT is interpreted within the kubelet PLEG as ContainerDied, which is to say that the container has run to completion.
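
For context, the kubelet's evented PLEG translates CRI container events into pod lifecycle events, and a "stopped" event is treated as the container having died. A minimal, self-contained sketch of that interpretation (placeholder types standing in for the CRI/PLEG types, not the actual kubelet source):

    package main

    import "fmt"

    // Placeholder names modelled on the CRI event types discussed in this PR.
    type containerEventType string

    const (
        containerStoppedEvent containerEventType = "CONTAINER_STOPPED_EVENT"
        containerDeletedEvent containerEventType = "CONTAINER_DELETED_EVENT"
    )

    // toPLEGEvent shows the interpretation described above: a stopped event
    // becomes "ContainerDied", i.e. the kubelet assumes the container ran to
    // completion. The other cases are simplified placeholders.
    func toPLEGEvent(t containerEventType) string {
        switch t {
        case containerStoppedEvent:
            return "ContainerDied"
        case containerDeletedEvent:
            return "ContainerRemoved"
        default:
            return "ContainerChanged"
        }
    }

    func main() {
        fmt.Println(toPLEGEvent(containerStoppedEvent)) // ContainerDied
    }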

@littlejawa
Contributor Author

/test kata-containers

@littlejawa littlejawa requested a review from fidencio as a code owner January 17, 2023 08:43
@codecov

codecov bot commented Jan 17, 2023

Codecov Report

Merging #6531 (cf4c22d) into main (a410ce6) will decrease coverage by 0.03%.
The diff coverage is 25.00%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6531      +/-   ##
==========================================
- Coverage   42.51%   42.48%   -0.03%     
==========================================
  Files         126      128       +2     
  Lines       14942    14946       +4     
==========================================
- Hits         6352     6350       -2     
- Misses       7900     7906       +6     
  Partials      690      690              

@littlejawa littlejawa force-pushed the fix_event_for_stopped_container branch from 77030ec to 54b1b8b Compare January 17, 2023 10:45
Updating the container's status when it's already removed causes an error.
We can ignore this error safely when we find the container was terminated already.

Signed-off-by: Julien Ropé <[email protected]>
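
Conceptually, the handling added here looks like the sketch below. It is a toy model of the runtime_vm code path described later in this thread; the type, function, and field names are placeholders, not the actual cri-o API:

    package main

    import (
        "context"
        "errors"
        "log"
    )

    // Container is a stand-in for the runtime's container handle.
    type Container struct {
        ID         string
        Terminated bool
    }

    var errShimNotFound = errors.New("shim not found")

    // updateContainerStatus stands in for runtimeVM.updateContainerStatus(),
    // which fails once the container's VM shim has been torn down.
    func updateContainerStatus(ctx context.Context, c *Container) error {
        if c.Terminated {
            return errShimNotFound
        }
        return nil
    }

    // safeUpdateStatus tolerates the failure for already-terminated containers,
    // so that CRI event generation can still proceed.
    func safeUpdateStatus(ctx context.Context, c *Container) error {
        if err := updateContainerStatus(ctx, c); err != nil {
            if c.Terminated {
                log.Printf("ignoring status update error for terminated container %s: %v", c.ID, err)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        c := &Container{ID: "abc123", Terminated: true}
        if err := safeUpdateStatus(context.Background(), c); err != nil {
            log.Fatal(err)
        }
        log.Println("event generation can proceed")
    }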
This was referenced Jan 17, 2023
@littlejawa
Contributor Author

At this point, the two commits in this PR bring the kata-jenkins job back to green.
It doesn't mean the PR needs to be merged as is - we still need to agree on the fixes. But I'm probably on the right track :-)

Both commits are fixing issues with the test "ctr pod lifecycle with evented pleg enabled"

  • the first one changes the event generated by the code when the container is stopped (from "DELETED" to "STOPPED"). I made this change because the test expects one and the code generates the other. I don't know which one is right; I just picked what made more sense to me, without knowing the actual feature being implemented.
    @harche - I think you're the one who can tell me? If you can't, do you have pointers I can look at to find the right answer?

  • the second one is specific to the runtimeVM implementation: on the DELETED event generation, the runtimeVM.updateContainerStatus() call was failing because no shim was found - for a good reason, since the container is deleted at this point. This generated an error that prevented the actual event generation.
    My code catches this situation and ignores the error, letting the rest of the event generation proceed as usual.
    @fidencio could you take a look here? There may be a simpler/cleaner way to do this.

I still consider this WIP. Any suggestion is welcome.

@harche
Contributor

harche commented Jan 17, 2023

ContainerEventType_CONTAINER_STOPPED_EVENT is interpreted within the kubelet PLEG as ContainerDied, which is to say that the container has run to completion. A container running to completion is not the same as a container that has been manually deleted.

Any changes in this code path have to be backed by passing node e2e tests in upstream k/k. Evented PLEG has been added very recently and is in alpha (feature gated). So far we run those tests manually, but @sairameshv is in the process of adding CI jobs to various repositories (k/k, crio, openshift/release) that use Evented PLEG.

@sairameshv does the redis container ever run to completion?

@openshift-ci
Contributor

openshift-ci bot commented Jan 17, 2023

@littlejawa: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/ci-rhel-integration cf4c22d link true /test ci-rhel-integration
ci/prow/ci-e2e-conmonrs cf4c22d link true /test ci-e2e-conmonrs
ci/prow/ci-rhel-e2e cf4c22d link true /test ci-rhel-e2e
ci/prow/e2e-gcp-ovn cf4c22d link true /test e2e-gcp-ovn

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@sairameshv
Member

ContainerEventType_CONTAINER_STOPPED_EVENT is interpreted within the kubelet PLEG as ContainerDied, which is to say that the container has run to completion. A container running to completion is not the same as a container that has been manually deleted.

Any changes in this code path have to be backed by passing node e2e tests in upstream k/k. Evented PLEG has been added very recently and is in alpha (feature gated). So far we run those tests manually, but @sairameshv is in the process of adding CI jobs to various repositories (k/k, crio, openshift/release) that use Evented PLEG.

@sairameshv does the redis container ever run to completion?

@harche @littlejawa I tested the ContainerEventType_CONTAINER_STOPPED_EVENT related change by creating a sample busybox pod that completes after running for 10s.
Pod config as follows:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: docker.io/library/busybox:latest
      command: ["sleep"]
      args: ["10"]
  restartPolicy: Never

Anyway, I also observed that this piece of code doesn't get hit if a container is stopped (or a pod runs to a completed state) on its own. This line is hit when a pod is manually deleted, and hence I feel the ContainerEventType_CONTAINER_DELETED_EVENT event is better suited here than ContainerEventType_CONTAINER_STOPPED_EVENT. WDYT?

@littlejawa
Contributor Author

Anyway, I also observed that this piece of code doesn't get hit if a container is stopped (or a pod runs to a completed state) on its own. This line is hit when a pod is manually deleted, and hence I feel the ContainerEventType_CONTAINER_DELETED_EVENT event is better suited here than ContainerEventType_CONTAINER_STOPPED_EVENT. WDYT?

Just to be sure: the test is failing on the "wait_for_log" part in the code below:

crictl stop "$ctr_id"
wait_for_log "Container event CONTAINER_STOPPED_EVENT generated"

This is on a crictl stop call. So the code I modified is reached when you stop the container without deleting it, right?
Further down, the test does crictl rm and expects CONTAINER_DELETED_EVENT - and it gets it.
Am I missing something?

I'm fine fixing the test rather than the code, I just want to make sure I understand.

@littlejawa
Contributor Author

@harche @sairameshv -
I am still trying to understand what needs to be done here.
The following is based on the test you added in ctr.bats, and testing with regular containers (with runc).

The test does the following:

        [...]
	crictl stop "$ctr_id"
	wait_for_log "Container event CONTAINER_STOPPED_EVENT generated"
	output=$(crictl ps --quiet --state exited)
	[[ "$output" == "$ctr_id" ]]

	crictl rm "$ctr_id"
	wait_for_log "Container event CONTAINER_DELETED_EVENT generated"
        [...]

During my test, I can see that crictl stop generates two events:

  • a CONTAINER_STOPPED_EVENT, coming from handleExit() in server.go when the container's exit is detected
  • a CONTAINER_DELETED_EVENT, coming from container_stop.go

Then crictl rm generates another CONTAINER_DELETED_EVENT, from container_remove.go:46

When doing the same test with kata containers, the first event is not generated. This is because no exit is detected - in kata, the container runs within a VM, so I guess cri-o can't detect the end of the process.
In other words, with kata we get two CONTAINER_DELETED_EVENTs, but no CONTAINER_STOPPED_EVENT.
This is why I made this modification: I replaced the first occurrence of CONTAINER_DELETED_EVENT with CONTAINER_STOPPED_EVENT, and this made the test happy, both for kata and runc.

Now I don't know if the fix I'm suggesting makes sense for the feature being worked on. It still feels strange to me that crictl stop generates two different events, and that the CONTAINER_DELETED_EVENT appears before we do crictl rm, but I can't decide what's really needed by the kubelet PLEG.

If you want to keep the CONTAINER_DELETED_EVENT in container_stop.go, then I need to generate the CONTAINER_STOPPED_EVENT from a different place for kata containers (within the runtime_vm.go file probably).
This is something we need to decide.

Also, I think the test has a bug: as both CONTAINER_STOPPED_EVENT and CONTAINER_DELETED_EVENT are generated from the crictl stop command, the test succeeds even if CONTAINER_DELETED_EVENT is NOT generated with crictl rm (because it's already in the log anyway). I verified this by removing the event generation from container_remove.go - the test still succeeds.

@harche
Contributor

harche commented Jan 26, 2023

A container running to completion must generate CONTAINER_STOPPED_EVENT. In evented PLEG, since the kubelet is not doing frequent periodic relisting, it has no way to know in a timely manner whether the container has run to completion.

If, while using kata, we are not generating CONTAINER_STOPPED_EVENT when a container runs to completion, then that needs to be fixed.

About the validity of the test, @sairameshv is going to look into it.

@sairameshv
Member

Hey @littlejawa, I agree with your change.
Sorry for the delayed response, as I was occupied with some other work.
Moreover, I verified your change by running the Node E2E tests (enabling the Evented PLEG feature) and I didn't observe any failures, as shown below:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>                              START TEST                                >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Start Test Suite on Host ramesh-fedora-coreos-37-20230110-3-1-gcp-x86-64
Running Suite: E2eNode Suite - /tmp/node-e2e-20230127T212648
============================================================
Random Seed: 1674835041 - will randomize all specs

Will run 183 of 384 specs
Running in parallel across 8 processes
  W0127 15:57:22.521168    2720 test_context.go:497] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
  I0127 15:57:22.521338    2720 test_context.go:514] Tolerating taints "node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master" when considering if nodes are ready
  Jan 27 15:57:22.521: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
  I0127 15:57:22.521598    2720 feature_gate.go:249] feature gates: &{map[]}
Validating os...
OS: Linux
Validating kernel...
KERNEL_VERSION: 6.0.18-300.fc37.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: enabled
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
Validating cgroups...
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
Validating package...
PASS
S
------------------------------
[SynchronizedBeforeSuite] PASSED [1503.599 seconds]
[SynchronizedBeforeSuite] 
test/e2e_node/e2e_node_suite_test.go:197

  Captured StdOut/StdErr Output >>
    W0127 15:57:22.521168    2720 test_context.go:497] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
    I0127 15:57:22.521338    2720 test_context.go:514] Tolerating taints "node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master" when considering if nodes are ready
    Jan 27 15:57:22.521: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
    I0127 15:57:22.521598    2720 feature_gate.go:249] feature gates: &{map[]}
  Validating os...
  OS: Linux
  Validating kernel...
  KERNEL_VERSION: 6.0.18-300.fc37.x86_64
  CONFIG_NAMESPACES: enabled
  CONFIG_NET_NS: enabled
  CONFIG_PID_NS: enabled
  CONFIG_IPC_NS: enabled
  CONFIG_UTS_NS: enabled
  CONFIG_CGROUPS: enabled
  CONFIG_CGROUP_CPUACCT: enabled
  CONFIG_CGROUP_DEVICE: enabled
  CONFIG_CGROUP_FREEZER: enabled
  CONFIG_CGROUP_PIDS: enabled
  CONFIG_CGROUP_SCHED: enabled
  CONFIG_CPUSETS: enabled
  CONFIG_MEMCG: enabled
  CONFIG_INET: enabled
  CONFIG_EXT4_FS: enabled
  CONFIG_PROC_FS: enabled
  CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
  CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
  CONFIG_FAIR_GROUP_SCHED: enabled
  CONFIG_OVERLAY_FS: enabled (as module)
  CONFIG_AUFS_FS: not set - Required for aufs.
  CONFIG_BLK_DEV_DM: enabled
  CONFIG_CFS_BANDWIDTH: enabled
  CONFIG_CGROUP_HUGETLB: enabled
  CONFIG_SECCOMP: enabled
  CONFIG_SECCOMP_FILTER: enabled
  Validating cgroups...
  CGROUPS_CPU: enabled
  CGROUPS_CPUACCT: enabled
  CGROUPS_CPUSET: enabled
  CGROUPS_DEVICES: enabled
  CGROUPS_FREEZER: enabled
  CGROUPS_MEMORY: enabled
  CGROUPS_PIDS: enabled
  CGROUPS_HUGETLB: enabled
  CGROUPS_BLKIO: enabled
  Validating package...
  PASS
  << Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSS•SS••S•••S•S••S•••SSSS•••S••SSSSSS••SS••S•SSS•••S••S••••SS•••S•SS•S••SSS•S•S••SS••SSSSS•S•SSS•••SS•S•••••S•S•••SSS•S••S•SS••S•SS••••S•S••••SSS••SS•S•••SS•S•SSSS•S•SSSSS•SSSS••SS•SS•••••S•SSSS••S•SSSSSSSSS•••S•S••SS••S•S••S•SSS••S•S••S•S•SS•SSSS•SSSS•S••SSSSSS•SSSSS•S•SSS•••SS•••S•S•S••SS•SS•••S•SSSSS••S•SS•S•S•SSS•S•••S••S••S••S•••S•S••S•••S••S•S•S•••SSSS•S•SS•••••••••

Ran 183 of 384 Specs in 2065.852 seconds
SUCCESS! -- 183 Passed | 0 Failed | 0 Pending | 201 Skipped


Ginkgo ran 1 suite in 34m26.727883719s
Test Suite Passed
You're using deprecated Ginkgo functionality:
=============================================
  --untilItFails is deprecated, use --until-it-fails instead
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.7.0


Success Finished Test Suite on Host ramesh-fedora-coreos-37-20230110-3-1-gcp-x86-64
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<                              FINISH TEST                               <
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

So, LGTM
cc: @harche

@harche
Contributor

harche commented Jan 27, 2023

@sairameshv this is on hold because StopContainer is not where we detect that the container has run to completion. CONTAINER_STOPPED_EVENT should only be emitted when the CRI runtime detects that the container has run to completion.

@sairameshv
Member

@sairameshv this is on hold because StopContainer is not where we detect that the container has run to completion. CONTAINER_STOPPED_EVENT should only be emitted when the CRI runtime detects that the container has run to completion.

Okay

@littlejawa
Contributor Author

littlejawa commented Jan 30, 2023

this is on hold because StopContainer is not where we detect that the container has run to completion. CONTAINER_STOPPED_EVENT should only be emitted when the CRI runtime detects that the container has run to completion.

@harche - I'm sorry, I'm confused again - still trying to understand.
Maybe I'm not clear on what "completion" means in this context.
Let's forget about kata for a moment.

The only places in cri-o where CONTAINER_STOPPED_EVENT is generated are:

  • server.go:899, during handleExit(). There's no check on the status of the container; the only other exit paths are for internal errors. To me, it means this event is generated whenever the container exits, whether or not it runs to completion?
  • sandbox_stop_linux.go:97, for the sandbox only, when the stopPodSandbox() function is called. Again, I'm not sure it takes completion into account; it just stops everything.

Shouldn't we check the status of the container in handleExit() before sending the event, if the event depends on the completion?
Or am I missing something about what the "completion" status is?

The fact that we generate CONTAINER_STOPPED_EVENT on stopPodSandbox() and not stopContainer() is part of why I made this change in the first place - I expected them to be symmetric somehow.
If we're not supposed to send CONTAINER_STOPPED_EVENT on stopContainer(), should we remove it from stopPodSandbox() too? If not, why?

I'm sorry if these are dumb questions, but I'm really trying to understand what's expected before changing anything for the kata-specific use case.

@harche
Contributor

harche commented Jan 30, 2023

The only places in cri-o where CONTAINER_STOPPED_EVENT is generated are:

  • server.go:899, during handleExit(). There's no check on the status of the container; the only other exit paths are for internal errors. To me, it means this event is generated whenever the container exits, whether or not it runs to completion?

handleExit() handles containers that cri-o thinks have exited (in other words, run to completion). It handles both regular containers and sandboxes that have run to completion.

  • sandbox_stop_linux.go:97, for the sandbox only, when the stopPodSandbox() function is called. Again, I'm not sure it takes completion into account; it just stops everything.

@sairameshv This looks like a bug. We shouldn't really generate ContainerEventType_CONTAINER_STOPPED_EVENT in that file. That's not where we detect that the sandbox has run to completion (exited). The handleExit() function gets invoked when cri-o gets notified (using fsnotify) that the container has exited, and we are already emitting ContainerEventType_CONTAINER_STOPPED_EVENT there.

Shouldn't we check the status of the container in handleExit() before sending the event, if the event depends on the completion? Or am I missing something about what the "completion" status is?

A container running to completion and a container exiting mean the same thing. In a broad sense, it just means that the container has terminated on its own, without the kubelet asking for that action. The runtime has already determined that the container has terminated (with the help of fsnotify). We just want to convey that information to the kubelet when Evented PLEG is turned on.
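
For readers unfamiliar with that mechanism: the container monitor writes an exit file when the container process terminates, and cri-o picks it up via fsnotify, which is what ultimately drives handleExit(). A minimal, self-contained sketch of that pattern (the directory path and log messages are illustrative, not the actual cri-o code):

    package main

    import (
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        // Directory where the container monitor drops "<container-id>" exit files
        // (the path is an assumption for illustration).
        exitsDir := "/var/run/crio/exits"

        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer watcher.Close()

        if err := watcher.Add(exitsDir); err != nil {
            log.Fatal(err)
        }

        for event := range watcher.Events {
            // A newly created exit file means the container terminated on its own;
            // this is the point where a CONTAINER_STOPPED_EVENT belongs.
            if event.Op&fsnotify.Create != 0 {
                containerID := filepath.Base(event.Name)
                log.Printf("container %s exited; emit CONTAINER_STOPPED_EVENT here", containerID)
            }
        }
    }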

The fact that we generate CONTAINER_STOPPED_EVENT on stopPodSandbox() and not stopContainer() is part of why I made this change in the first place - I expected them to be symmetric somehow. If we're not supposed to send CONTAINER_STOPPED_EVENT on stopContainer(), should we remove it from stopPodSandbox() too? If not, why?

stopPodSandbox() also should not emit CONTAINER_STOPPED_EVENT.

Also, stopContainer() should not emit CONTAINER_DELETED_EVENT; that event should only be emitted from container_remove.go.
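
Put together, the event placement being described here would look roughly like this (an illustrative sketch of the intended end state, not an actual diff from this PR):

    // server.go, handleExit(): the container exited on its own
    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_STOPPED_EVENT)

    // container_remove.go: the container was removed
    s.generateCRIEvent(ctx, c, types.ContainerEventType_CONTAINER_DELETED_EVENT)

    // container_stop.go and sandbox_stop_linux.go: no CRI event emitted here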

I'm sorry if these are dumb questions, but I'm really trying to understand what's expected before changing anything for the kata-specific use case.

No problem. Evented PLEG is a very new feature, and you are already helping us discover the bugs in our current implementation. So thanks a lot for that.

@littlejawa
Contributor Author

I think I have an understanding of what's needed for the kata use case. I'll make a separate PR for it though, as the fix will be specific to the runtime_vm implementation and not directly related to everything discussed here.
Thanks for all the info.
