
Conversation

@ayushpatil2122

@ayushpatil2122 ayushpatil2122 commented Mar 6, 2025

What this PR does

Before this PR:
virt-handler entered CrashLoopBackOff due to an invalid memory address or nil pointer dereference.

After this PR:
The nil pointer dereference is now handled.

Fixes #14064

Why we need it and why it was done in this way

The following tradeoffs were made:

The following alternatives were considered:

Links to places where the discussion took place:

Special notes for your reviewer

Checklist

This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.

Release note

   handle nil pointer dereference in cellToCell

@kubevirt-bot kubevirt-bot added the release-note, dco-signoff: yes, and sig/compute labels on Mar 6, 2025
@kubevirt-bot
Contributor

Hi @ayushpatil2122. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@dosubot dosubot bot added the kind/bug and priority/critical-urgent labels on Mar 6, 2025
Comment on lines 46 to 118
It("should panic when NUMA cells are nil", func() {
caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
caps.Host.NUMA.Cells = nil

Expect(func() {
capabilitiesToTopology(caps)
}).To(Panic())
})

It("should panic when NUMA cell memory is nil", func() {
caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
caps.Host.NUMA.Cells = &libvirtxml.CapsHostNUMACells{
Cells: []libvirtxml.CapsHostNUMACell{
{
Memory: nil, // This should cause a panic
PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
{Unit: memoryUnit, Size: 4, Count: 4064224},
},
Distances: &libvirtxml.CapsHostNUMADistances{Siblings: []libvirtxml.CapsHostNUMASibling{{ID: 0, Value: 10}}},
CPUS: &libvirtxml.CapsHostNUMACPUs{CPUs: []libvirtxml.CapsHostNUMACPU{
{ID: 0, Siblings: "0,4"},
}},
},
},
}

Expect(func() {
capabilitiesToTopology(caps)
}).To(Panic())
})

It("should panic when NUMA cell CPUs are nil", func() {
caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
caps.Host.NUMA.Cells = &libvirtxml.CapsHostNUMACells{
Cells: []libvirtxml.CapsHostNUMACell{
{
Memory: &libvirtxml.CapsHostNUMAMemory{Unit: memoryUnit, Size: 16256896},
PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
{Unit: memoryUnit, Size: 4, Count: 4064224},
},
Distances: &libvirtxml.CapsHostNUMADistances{Siblings: []libvirtxml.CapsHostNUMASibling{{ID: 0, Value: 10}}},
CPUS: nil, // This should cause a panic
},
},
}

Expect(func() {
capabilitiesToTopology(caps)
}).To(Panic())
})

It("should panic when NUMA cell distances are nil", func() {
caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
caps.Host.NUMA.Cells = &libvirtxml.CapsHostNUMACells{
Cells: []libvirtxml.CapsHostNUMACell{
{
Memory: &libvirtxml.CapsHostNUMAMemory{Unit: memoryUnit, Size: 16256896},
PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
{Unit: memoryUnit, Size: 4, Count: 4064224},
},
Distances: nil, // This should cause a panic
CPUS: &libvirtxml.CapsHostNUMACPUs{CPUs: []libvirtxml.CapsHostNUMACPU{
{ID: 0, Siblings: "0,4"},
}},
},
},
}

Expect(func() {
capabilitiesToTopology(caps)
}).To(Panic())
})

Member

You can merge this into a DescribeTable and pass the various struct as arguments

Contributor

As @alicefr suggested, this can be:

		DescribeTable("should not panic", func(cells *libvirtxml.CapsHostNUMACells) {
			caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
			caps.Host.NUMA.Cells = cells

			Expect(func() {
				capabilitiesToTopology(caps)
			}).ToNot(Panic())

		},
			Entry("when NUMA cells are nil", nil),
			Entry("when NUMA cell memory is nil", &libvirtxml.CapsHostNUMACells{
				Cells: []libvirtxml.CapsHostNUMACell{
					{
						Memory: nil, // This should cause a panic
						PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
							{Unit: memoryUnit, Size: 4, Count: 4064224},
						},
						Distances: &libvirtxml.CapsHostNUMADistances{Siblings: []libvirtxml.CapsHostNUMASibling{{ID: 0, Value: 10}}},
						CPUS: &libvirtxml.CapsHostNUMACPUs{CPUs: []libvirtxml.CapsHostNUMACPU{
							{ID: 0, Siblings: "0,4"},
						}},
					},
				}}),
			Entry("when NUMA cell CPUs are nil", &libvirtxml.CapsHostNUMACells{
				Cells: []libvirtxml.CapsHostNUMACell{
					{
						Memory: &libvirtxml.CapsHostNUMAMemory{Unit: memoryUnit, Size: 16256896},
						PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
							{Unit: memoryUnit, Size: 4, Count: 4064224},
						},
						Distances: &libvirtxml.CapsHostNUMADistances{Siblings: []libvirtxml.CapsHostNUMASibling{{ID: 0, Value: 10}}},
						CPUS:      nil, // This should cause a panic
					},
				},
			}),
			Entry("panic when NUMA cell distances are nil", &libvirtxml.CapsHostNUMACells{
				Cells: []libvirtxml.CapsHostNUMACell{
					{
						Memory: &libvirtxml.CapsHostNUMAMemory{Unit: memoryUnit, Size: 16256896},
						PageInfo: []libvirtxml.CapsHostNUMAPageInfo{
							{Unit: memoryUnit, Size: 4, Count: 4064224},
						},
						Distances: nil, // This should cause a panic
						CPUS: &libvirtxml.CapsHostNUMACPUs{CPUs: []libvirtxml.CapsHostNUMACPU{
							{ID: 0, Siblings: "0,4"},
						}},
					},
				},
			}),
		)

NOTE
We want this NOT to panic; the assertion and test description were wrong.

@alicefr
Member

alicefr commented Mar 7, 2025

@ayushpatil2122 I will repeat my previous comment here: can you please verify that this actually fixes the original issue #14064? Otherwise, we don't actually know if this fixes the issue.

@mohit-nagaraj
Contributor

Also, as @alicefr mentioned, were you able to check if it actually occurs by testing it out in Minikube?

@groundsada

Yes, please approve this. The cluster I'm working on is facing the same issues, and hopefully this will fix it 🤞

@victortoso
Member

If the segfault is due to configuration options (AFAIR), it should be possible to create a reproducer with the test framework. You could do something like:

  • First commit, introduce the reproducer as a test. Note that the test should not fail, that is, you'll use something like Should(Panic()).
  • Second commit, apply the fix and change the test with Should(Succeed()) or something similar.

This is hitting a lot of users so please do let us know if you don't have the time to work on it.
Cheers,
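
Roughly, the reproducer in the first commit could look like this (just a sketch, assuming the existing capabilitiesToTopology helper and the libvirtxml caps structs used elsewhere in this thread):

// First commit: reproducer only, asserting the current (buggy) behaviour.
It("reproduces the segfault when the NUMA distances element is missing", func() {
	caps := &libvirtxml.Caps{Host: libvirtxml.CapsHost{NUMA: &libvirtxml.CapsHostNUMATopology{}}}
	caps.Host.NUMA.Cells = &libvirtxml.CapsHostNUMACells{
		Cells: []libvirtxml.CapsHostNUMACell{
			{Distances: nil}, // <distances> absent, as in the reported topology XML
		},
	}

	Expect(func() {
		capabilitiesToTopology(caps)
	}).To(Panic())
})

// Second commit: apply the fix and flip the assertion to ToNot(Panic()).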

@ayushpatil2122
Author

If the segfault is due to configuration options (AFAIR), it should be possible to create a reproducer with the test framework. You could do something like:

  • First commit, introduce the reproducer as a test. Note that the test should not fail, that is, you'll use something like Should(Panic()).
  • Second commit, apply the fix and change the test with Should(Succeed()) or something similar.

This is hitting a lot of users so please do let us know if you don't have the time to work on it. Cheers,

I have written a unit test as well to reproduce this issue.

@aidanleuck

aidanleuck commented Mar 20, 2025

Any chance this will get backported into the 1.5 release once merged? I can't test 1.5 locally without a full server environment.

Edit: I cloned this branch and tested it with my K3D cluster, and it solves my issue. I haven't tested with Minikube, but I had the same stack trace, so I think this PR looks good.

@groundsada

@victortoso are there plans for this to get merged? I have 200/420 nodes breaking with the same errors. Can you push this through, please 🙏?

func capabilitiesToTopology(capabilities *libvirtxml.Caps) *cmdv1.Topology {
topology := &cmdv1.Topology{}
if capabilities == nil {
if capabilities == nil || capabilities.Host.NUMA == nil {
Member

My feeling about this patch is that it does not solve the problem. It just adds guards to avoid a segfault.
It wasn't nil before, so why is it now? When did this happen?

The guards aren't wrong, and the patch may well avoid a segfault, but I would love to know why it became a problem now.

@aidanleuck aidanleuck Mar 21, 2025

This commit looks like the culprit:
5cb6bd3.

My initial thought is that there is a difference in how the XML is being unmarshaled with the switch to libvirt. I will try cherry-picking this commit onto 1.3, since the issue does not appear on that branch, and let you know if the error reproduces itself. I will update this post as I learn more.

@aidanleuck

aidanleuck commented Mar 21, 2025

Alright, my previous thought about this commit being the issue seems to be correct. @victortoso

I compared the capabilities.xml between KubeVirt 1.3 and KubeVirt 1.5, and they are equivalent. So, there are no unexpected changes to the XML format from the libvirt side. The XML output by the node-labeller on my cluster for the topology section is as follows.

 <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>20482408</memory>
          <cpus num='20'>
            <cpu id='0' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' die_id='0' cluster_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='2-3'/>
            <cpu id='3' socket_id='0' die_id='0' cluster_id='0' core_id='1' siblings='2-3'/>
            <cpu id='4' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='4-5'/>
            <cpu id='5' socket_id='0' die_id='0' cluster_id='0' core_id='2' siblings='4-5'/>
            <cpu id='6' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='6-7'/>
            <cpu id='7' socket_id='0' die_id='0' cluster_id='0' core_id='3' siblings='6-7'/>
            <cpu id='8' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='8-9'/>
            <cpu id='9' socket_id='0' die_id='0' cluster_id='0' core_id='4' siblings='8-9'/>
            <cpu id='10' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='10-11'/>
            <cpu id='11' socket_id='0' die_id='0' cluster_id='0' core_id='5' siblings='10-11'/>
            <cpu id='12' socket_id='0' die_id='0' cluster_id='0' core_id='6' siblings='12-13'/>
            <cpu id='13' socket_id='0' die_id='0' cluster_id='0' core_id='6' siblings='12-13'/>
            <cpu id='14' socket_id='0' die_id='0' cluster_id='0' core_id='7' siblings='14-15'/>
            <cpu id='15' socket_id='0' die_id='0' cluster_id='0' core_id='7' siblings='14-15'/>
            <cpu id='16' socket_id='0' die_id='0' cluster_id='0' core_id='8' siblings='16-17'/>
            <cpu id='17' socket_id='0' die_id='0' cluster_id='0' core_id='8' siblings='16-17'/>
            <cpu id='18' socket_id='0' die_id='0' cluster_id='0' core_id='9' siblings='18-19'/>
            <cpu id='19' socket_id='0' die_id='0' cluster_id='0' core_id='9' siblings='18-19'/>
          </cpus>
        </cell>
      </cells>
    </topology>

In the commit above, options.go now uses libvirt's XML model as opposed to the one located here: https://github.com/kubevirt/kubevirt/blob/v1.3.0/pkg/virt-handler/node-labeller/api/capabilities.go.

You can see the distances struct is a slice of siblings, not a pointer.

type Distances struct {
	Sibling []Sibling `xml:"sibling"`
}

In libvirt's model it is a pointer to a structure, not a slice. See https://pkg.go.dev/github.com/libvirt/libvirt-go-xml#CapsHostNUMACell. In the XML that I provided above, there is no distances entry underneath the cell. If there were a distances entry, it would look something like the following:

 <topology>
      <cells num='1'>
        <cell id='0'>
          <distances>
           .......
         </distances>
        </cell>
      </cells>
    </topology>

Since libvirt uses a pointer instead of a slice, the structure will be nil when the distances entry does not exist during unmarshaling. Previously, distances were a slice, so when unmarshaled it would simply be empty, preventing a segmentation fault during the loop.
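
A tiny standalone example (not KubeVirt code, just simplified structs showing the encoding/xml behaviour) illustrates the difference:

package main

import (
	"encoding/xml"
	"fmt"
)

type Sibling struct {
	ID int `xml:"id,attr"`
}

// Mirrors the libvirt-go-xml style: distances is a pointer field.
type Distances struct {
	Siblings []Sibling `xml:"sibling"`
}

type cellWithPointer struct {
	Distances *Distances `xml:"distances"`
}

// Mirrors the old node-labeller style: siblings end up in a plain slice.
type cellWithSlice struct {
	Siblings []Sibling `xml:"distances>sibling"`
}

func main() {
	// A cell with no <distances> element, like the topology XML above.
	const doc = `<cell id="0"><memory unit="KiB">20482408</memory></cell>`

	var p cellWithPointer
	_ = xml.Unmarshal([]byte(doc), &p)
	fmt.Println(p.Distances == nil) // true: ranging over p.Distances.Siblings would panic

	var s cellWithSlice
	_ = xml.Unmarshal([]byte(doc), &s)
	for range s.Siblings {
		// never runs: a nil slice is safe to range over
	}
	fmt.Println(len(s.Siblings)) // prints 0
}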

Some of the nil checks in this PR may be redundant, as the only line I had to touch was options.go:98 in order to fix the segfault. However, I think adding the extra nil checks is a good idea, since the pointers could potentially be nil.

if cell.Distances != nil {
	for _, distance := range cell.Distances.Siblings {
		c.Distances = append(c.Distances, distanceToDistance(distance))
	}
}

As a sanity check to make sure this commit is the problem, I also cherry-picked it onto KubeVirt 1.3, a version before this change was introduced. I was able to reproduce the same stack trace.

git checkout v1.3.0
git cherry-pick 5cb6bd3181634cdcba1faa29204bc5a410b55a0b
make && make push

Stack trace (KubeVirt 1.3 with commit 5cb6bd3):

E0321 14:49:45.939269   12401 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 302 [running]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1ff0080, 0x3a662a0})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x85
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000590090?})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x6b
panic({0x1ff0080?, 0x3a662a0?})
	GOROOT/src/runtime/panic.go:770 +0x132
kubevirt.io/kubevirt/pkg/virt-handler.cellToCell({0x0, 0xc000124330, {0x0, 0x0, 0x0}, 0x0, {0x0, 0x0, 0x0}, 0xc0006dd5c0})
	pkg/virt-handler/options.go:78 +0x1f1
kubevirt.io/kubevirt/pkg/virt-handler.capabilitiesToTopology(0xc00039ae60)
	pkg/virt-handler/options.go:60 +0xd8
kubevirt.io/kubevirt/pkg/virt-handler.virtualMachineOptions(0xc0000e2050, 0xa, {0x0, 0x0, 0x0}, 0xc001988d00?, 0xc001991200, 0xc00021fd40)
	pkg/virt-handler/options.go:27 +0x59
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).vmUpdateHelperDefault(0xc0002d2d00, 0xc0006de608, 0x0)
	pkg/virt-handler/vm.go:3182 +0x1685
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).processVmUpdate(0xc0002d2d00, 0xc0006de608, 0x0)
	pkg/virt-handler/vm.go:3320 +0x212
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).defaultExecute(0xc0002d2d00, {0xc0007c0258, 0x16}, 0xc0006de608, 0x1, 0x0, 0x0)
	pkg/virt-handler/vm.go:2042 +0x3098
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).execute(0xc0002d2d00, {0xc0007c0258, 0x16})
	pkg/virt-handler/vm.go:2157 +0xa25
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).Execute(0xc0002d2d00)
	pkg/virt-handler/vm.go:1712 +0xf2
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).runWorker(...)
	pkg/virt-handler/vm.go:1702
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226 +0x33
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001876040, {0x26ec380, 0xc0017007e0}, 0x1, 0xc0003be7e0)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227 +0xaf
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001876040, 0x3b9aca00, 0x0, 0x1, 0xc0003be7e0)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204 +0x7f
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161
created by kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).Run in goroutine 281
	pkg/virt-handler/vm.go:1693 +0x77c
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1d082f1]

goroutine 302 [running]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000590090?})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xcd
panic({0x1ff0080?, 0x3a662a0?})
	GOROOT/src/runtime/panic.go:770 +0x132
kubevirt.io/kubevirt/pkg/virt-handler.cellToCell({0x0, 0xc000124330, {0x0, 0x0, 0x0}, 0x0, {0x0, 0x0, 0x0}, 0xc0006dd5c0})
	pkg/virt-handler/options.go:78 +0x1f1
kubevirt.io/kubevirt/pkg/virt-handler.capabilitiesToTopology(0xc00039ae60)
	pkg/virt-handler/options.go:60 +0xd8
kubevirt.io/kubevirt/pkg/virt-handler.virtualMachineOptions(0xc0000e2050, 0xa, {0x0, 0x0, 0x0}, 0xc001988d00?, 0xc001991200, 0xc00021fd40)
	pkg/virt-handler/options.go:27 +0x59
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).vmUpdateHelperDefault(0xc0002d2d00, 0xc0006de608, 0x0)
	pkg/virt-handler/vm.go:3182 +0x1685
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).processVmUpdate(0xc0002d2d00, 0xc0006de608, 0x0)
	pkg/virt-handler/vm.go:3320 +0x212
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).defaultExecute(0xc0002d2d00, {0xc0007c0258, 0x16}, 0xc0006de608, 0x1, 0x0, 0x0)
	pkg/virt-handler/vm.go:2042 +0x3098
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).execute(0xc0002d2d00, {0xc0007c0258, 0x16})
	pkg/virt-handler/vm.go:2157 +0xa25
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).Execute(0xc0002d2d00)
	pkg/virt-handler/vm.go:1712 +0xf2
kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).runWorker(...)
	pkg/virt-handler/vm.go:1702
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226 +0x33
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001876040, {0x26ec380, 0xc0017007e0}, 0x1, 0xc0003be7e0)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227 +0xaf
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001876040, 0x3b9aca00, 0x0, 0x1, 0xc0003be7e0)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204 +0x7f
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161
created by kubevirt.io/kubevirt/pkg/virt-handler.(*VirtualMachineController).Run in goroutine 281
	pkg/virt-handler/vm.go:1693 +0x77c

@fossedihelm
Contributor

(Quoting @aidanleuck's full analysis above.)

Many thanks for the deep analysis.

Contributor

@fossedihelm fossedihelm left a comment

Thank you!
Please see my comments below.
Do not forget to run make && make generate before pushing changes.

Comment on lines 46 to 118

(Repeats the test code and the DescribeTable suggestion already shown above.)

Comment on lines 98 to 101
if cell.PageInfo != nil {
for _, page := range cell.PageInfo {
c.Pages = append(c.Pages, pageToPage(page))
}
Contributor

Iterating over a nil slice is a safe no-op. Please drop this.
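
For example, this compiles and runs without a panic, which is why the extra check is unnecessary:

package main

import "fmt"

func main() {
	var pages []int // nil slice, e.g. cell.PageInfo when libvirt reports no page info
	for range pages {
		fmt.Println("never reached") // a range over a nil slice runs zero iterations
	}
	fmt.Println("no panic, len =", len(pages)) // prints: no panic, len = 0
}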


for _, distance := range cell.Distances.Siblings {
c.Distances = append(c.Distances, distanceToDistance(distance))
if cell.Distances != nil && cell.Distances.Siblings != nil {
Contributor

Iterating over a nil slice is a safe no-op:

Suggested change
if cell.Distances != nil && cell.Distances.Siblings != nil {
if cell.Distances != nil {


for _, cpu := range cell.CPUS.CPUs {
c.Cpus = append(c.Cpus, cpuToCPU(cpu))
if cell.CPUS != nil && cell.CPUS.CPUs != nil {
Contributor

Iterating over a nil slice is a safe no-op:

Suggested change
if cell.CPUS != nil && cell.CPUS.CPUs != nil {
if cell.CPUS != nil {

func capabilitiesToTopology(capabilities *libvirtxml.Caps) *cmdv1.Topology {
topology := &cmdv1.Topology{}
if capabilities == nil {
if capabilities == nil || capabilities.Host.NUMA == nil {
Contributor

Please use early exit:

Suggested change
if capabilities == nil || capabilities.Host.NUMA == nil {
if capabilities == nil || capabilities.Host.NUMA == nil || capabilities.Host.NUMA.Cells == nil {

and drop the changes at line 78
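
For illustration, the top of the function with that early exit would read roughly like this (a sketch only; the per-cell conversion body is elided):

func capabilitiesToTopology(capabilities *libvirtxml.Caps) *cmdv1.Topology {
	topology := &cmdv1.Topology{}
	// One early exit covers all the nil cases before anything is dereferenced.
	if capabilities == nil || capabilities.Host.NUMA == nil || capabilities.Host.NUMA.Cells == nil {
		return topology
	}
	for _, cell := range capabilities.Host.NUMA.Cells.Cells {
		// per-cell conversion via cellToCell(cell) as in the existing code;
		// the separate Cells nil check at line 78 is then no longer needed
		_ = cell
	}
	return topology
}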

@fossedihelm
Contributor

@ayushpatil2122 Please squash the two commits into one.
And also, make sure you run make && make generate and add the generated changes to the commit.
I am expecting to see just one commit.
Thank you!

Member

@victortoso victortoso left a comment

/test pull-kubevirt-build
/test pull-kubevirt-code-lint
/test pull-kubevirt-unit-test

for _, page := range cell.PageInfo {
c.Pages = append(c.Pages, pageToPage(page))
c.Pages = append(c.Pages, pageToPage(page))
Member

remove extra space

@victortoso
Member

10:37:59: Changes not staged for commit:
10:37:59:   (use "git add <file>..." to update what will be committed)
10:37:59:   (use "git restore <file>..." to discard changes in working directory)
10:37:59: 	modified:   pkg/virt-handler/options.go
10:37:59: 	modified:   pkg/virt-handler/options_test.go
10:37:59: 
10:37:59: no changes added to commit (use "git add" and/or "git commit -a")
make: *** [Makefile:154: build-verify] Error 1
  • Please do rebase on top of the latest main.
  • Run make generate and add the changes to your commit (squashed).
  • Make sure make and make test work with no git diff.

Let me know if you hit any issues.

@ayushpatil2122 ayushpatil2122 force-pushed the issueNilPointerDereference branch from 30baa4d to e1a5ec1 on March 27, 2025 19:09
@victortoso
Member

You are almost there. Let's try to get it done in the next iteration so we can move on to merge and backport this.

I've fixed the build and pushed it to my review-14145 branch. You need to squash this commit into your PR.

Please do @-mention me so I can take a look as soon as you get this done. Thanks again for your work.

Signed-off-by: ayushpatil2122 <[email protected]>

Signed-off-by: ayushpatil2122 <[email protected]>

Signed-off-by: ayushpatil2122 <[email protected]>

Signed-off-by: ayushpatil2122 <[email protected]>

Signed-off-by: ayushpatil2122 <[email protected]>
@ayushpatil2122 ayushpatil2122 force-pushed the issueNilPointerDereference branch from 3aec2a4 to 078f7df on April 1, 2025 19:17
@ayushpatil2122
Author

@victortoso done, thanks to you

@victortoso
Member

/test pull-kubevirt-build
/test pull-kubevirt-code-lint
/test pull-kubevirt-unit-test

@victortoso
Member

/lgtm

Thanks again!

@kubevirt-bot kubevirt-bot added the lgtm label on Apr 1, 2025
@kubevirt-commenter-bot

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.30-vgpu
/test pull-kubevirt-e2e-kind-sriov
/test pull-kubevirt-e2e-k8s-1.32-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-network
/test pull-kubevirt-e2e-k8s-1.30-sig-storage
/test pull-kubevirt-e2e-k8s-1.30-sig-compute
/test pull-kubevirt-e2e-k8s-1.30-sig-operator
/test pull-kubevirt-e2e-k8s-1.31-sig-network
/test pull-kubevirt-e2e-k8s-1.31-sig-storage
/test pull-kubevirt-e2e-k8s-1.31-sig-compute
/test pull-kubevirt-e2e-k8s-1.31-sig-operator

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@kubevirt-bot kubevirt-bot merged commit 9fb3f6e into kubevirt:main Apr 2, 2025
39 checks passed
@victortoso
Member

I missed the fact that we should have added the history behind this change into the commit log. My bad.
Thanks to @aidanleuck for digging into it: #14145 (comment)

The commit that introduced this issue is 5cb6bd3, and it is present in both v1.4.0 and v1.5.0, so we should backport it.

toso@tapioca ~/s/k/kubevirt (main)> git tag --contains 5cb6bd3181634cdcba1faa29204bc5a410b55a0b
v1.4.0
v1.4.0-alpha.1
v1.4.0-rc.0
v1.4.0-rc.1
v1.4.0-rc.2
v1.5.0
v1.5.0-alpha.0
v1.5.0-beta.0
v1.5.0-rc.0
v1.5.0-rc.1
v1.5.0-rc.2

I'm not sure I can do it, but let's try.

/cherry-pick release-1.5
/cherry-pick release-1.4

@kubevirt-bot
Contributor

@victortoso: new pull request created: #14402

Details

In response to this:

(Quotes the /cherry-pick request above.)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@kubevirt-bot
Contributor

@victortoso: new pull request created: #14403

Details

In response to this:

(Quotes the /cherry-pick request above.)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@max06

max06 commented Apr 6, 2025

@victortoso and @ayushpatil2122 thank you very much for pushing through!

Without trying to put more pressure... any idea when this gets released? 😇

@ayushpatil2122
Author

@victortoso and @ayushpatil2122 thank you very much for pushing through!

Without trying to put more pressure... any idea when this gets released? 😇

Not sure, but soon.

@victortoso
Member

Without trying to put more pressure... any idea when this gets released? 😇

@max06 this PR is targeted for the KubeVirt 1.6 release, several weeks away.
It was backported to the 1.5 and 1.4 releases too. If you find any issues, let us know.

@max06

max06 commented Apr 7, 2025

@victortoso good morning and thanks for the fast reply!

I tried setting up a fresh installation with v1.5.0 and ran into this exact issue. To me it looks like the fix got backported but not yet released, in a v1.5.1 for example.

@fossedihelm
Contributor

@max06 Exactly! We are discussing it. Will let you know!

@fossedihelm
Contributor

The plan is to create a z-release during this week.
Thank you!

@max06

max06 commented Apr 7, 2025

Thank you very much! 🙇🏼‍♂️

@cheina97

cheina97 commented Apr 29, 2025

The plan is to create a z-release during this week. Thank you!

Any updates?

@fossedihelm
Contributor

Apologies, it was rescheduled for Monday the 5th.

@fossedihelm
Contributor

@max06 @cheina97 https://github.com/kubevirt/kubevirt/releases/tag/v1.5.1


Labels

approved (Indicates a PR has been approved by an approver from all required OWNERS files.)
dco-signoff: yes (Indicates the PR's author has DCO signed all their commits.)
kind/bug
lgtm (Indicates that a PR is ready to be merged.)
priority/critical-urgent (Categorizes an issue or pull request as critical and of urgent priority.)
release-note (Denotes a PR that will be considered when it comes time to generate release notes.)
sig/compute
size/M

Projects

None yet

Development

Successfully merging this pull request may close these issues.

virt-handler became CrashLoopBackOff due to invalid memory address or nil pointer dereference