The e2e conformance test "should serve multiport endpoints from pods" doesn't check connectivity #101446

@aojea

Description

The test only validates that the API objects are created:

/*
	Release: v1.9
	Testname: Service, endpoints with multiple ports
	Description: Create a service with two ports but no Pods are added to the service yet. The service MUST run and show empty set of endpoints. Add a Pod to the first port, service MUST list one endpoint for the Pod on that port. Add another Pod to the second port, service MUST list both the endpoints. Delete the first Pod and the service MUST list only the endpoint to the second Pod. Delete the second Pod and the service must now have empty set of endpoints.
*/
framework.ConformanceIt("should serve multiport endpoints from pods ", func() {
	// repacking functionality is intentionally not tested here - it's better to test it in an integration test.
	serviceName := "multi-endpoint-test"
	ns := f.Namespace.Name
	jig := e2eservice.NewTestJig(cs, ns, serviceName)

	defer func() {
		err := cs.CoreV1().Services(ns).Delete(context.TODO(), serviceName, metav1.DeleteOptions{})
		framework.ExpectNoError(err, "failed to delete service: %s in namespace: %s", serviceName, ns)
	}()

	svc1port := "svc1"
	svc2port := "svc2"

	ginkgo.By("creating service " + serviceName + " in namespace " + ns)
	_, err := jig.CreateTCPService(func(service *v1.Service) {
		service.Spec.Ports = []v1.ServicePort{
			{
				Name:       "portname1",
				Port:       80,
				TargetPort: intstr.FromString(svc1port),
			},
			{
				Name:       "portname2",
				Port:       81,
				TargetPort: intstr.FromString(svc2port),
			},
		}
	})
	framework.ExpectNoError(err)

	port1 := 100
	port2 := 101

	validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{})

	names := map[string]bool{}
	defer func() {
		for name := range names {
			err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{})
			framework.ExpectNoError(err, "failed to delete pod: %s in namespace: %s", name, ns)
		}
	}()

	containerPorts1 := []v1.ContainerPort{
		{
			Name:          svc1port,
			ContainerPort: int32(port1),
		},
	}
	containerPorts2 := []v1.ContainerPort{
		{
			Name:          svc2port,
			ContainerPort: int32(port2),
		},
	}

	podname1 := "pod1"
	podname2 := "pod2"

	createPodOrFail(f, ns, podname1, jig.Labels, containerPorts1)
	names[podname1] = true
	validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{podname1: {port1}})

	createPodOrFail(f, ns, podname2, jig.Labels, containerPorts2)
	names[podname2] = true
	validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{podname1: {port1}, podname2: {port2}})

	e2epod.DeletePodOrFail(cs, ns, podname1)
	delete(names, podname1)
	validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{podname2: {port2}})

	e2epod.DeletePodOrFail(cs, ns, podname2)
	delete(names, podname2)
	validateEndpointsPortsOrFail(cs, ns, serviceName, portsByPodName{})
})

E2e tests must verify the behaviour, i.e. that the Service and its Endpoints actually work when sending traffic.

The file is not well organized and there is a lot of duplicate code. One possibility is to use the existing Services helper, which has a function to test reachability by iterating over all the service ports:

ginkgo.By("Checking if the pod can reach itself")
err = jig.CheckServiceReachability(svc, pod)
framework.ExpectNoError(err)
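
A sketch of how that could be wired into the multiport test above, assuming the era's helper signatures (the client pod name "multiport-client" is illustrative; CreateExecPodOrFail is the helper other service e2e tests use to get an exec client):

// Sketch only: fetch the Service once both backend pods exist, then probe it
// from a client pod. CheckServiceReachability iterates over every entry in
// svc.Spec.Ports, so both port 80 and port 81 get exercised in one call.
svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), serviceName, metav1.GetOptions{})
framework.ExpectNoError(err)

// Hypothetical client pod used to send the traffic.
execPod := e2epod.CreateExecPodOrFail(cs, ns, "multiport-client", nil)

ginkgo.By("checking connectivity to both service ports")
err = jig.CheckServiceReachability(svc, execPod)
framework.ExpectNoError(err)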

However, something that needs to be addressed is that the pods backing the service don't have any process listening on those ports:

func createPodOrFail(f *framework.Framework, ns, name string, labels map[string]string, containerPorts []v1.ContainerPort) {
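
A minimal sketch of a variant that starts a real server behind the named port, assuming agnhost's netexec is used (the name createServingPodOrFail is hypothetical, not the actual fix; imageutils is k8s.io/kubernetes/test/utils/image):

// Sketch: run agnhost's netexec server so the container actually listens on
// the port the Service targets. Assumes one container port per pod, as in
// the test above; the function shape mirrors createPodOrFail.
func createServingPodOrFail(f *framework.Framework, ns, name string, labels map[string]string, containerPorts []v1.ContainerPort) {
	ginkgo.By(fmt.Sprintf("Creating pod %s in namespace %s", name, ns))
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: labels,
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:  "agnhost",
					Image: imageutils.GetE2EImage(imageutils.Agnhost),
					// netexec serves HTTP on --http-port, so there is a real
					// process behind the named container port.
					Args:  []string{"netexec", fmt.Sprintf("--http-port=%d", containerPorts[0].ContainerPort)},
					Ports: containerPorts,
				},
			},
		},
	}
	_, err := f.ClientSet.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	framework.ExpectNoError(err, "failed to create pod %s in namespace %s", name, ns)
}

With a listener in place, a reachability check like the one sketched above has something to connect to instead of getting connection refused.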

Labels

priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
sig/network: Categorizes an issue or PR as relevant to SIG Network.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.