Conversation

@ChrsMark (Member) commented Apr 6, 2021

What does this PR do?

This PR adds tests for the k8s package using the k8s service deployer.

Related to #822

ChrsMark added 4 commits April 6, 2021 09:22
@elasticmachine commented Apr 6, 2021

💚 Build Succeeded


Build stats

  • Build Cause: Pull request #886 updated

  • Start Time: 2021-04-22T12:34:17.314+0000

  • Duration: 32 min 51 sec

  • Commit: 99c86ed

Test stats 🧪

  • Failed: 0
  • Passed: 359
  • Skipped: 0
  • Total: 359


@ChrsMark force-pushed the add_k8s_integration_tests branch from 5752d32 to 375177f on April 6, 2021 14:16
@ChrsMark (Member, Author) commented Apr 6, 2021

@mtojek @ycombinator I think this scenario might be a good candidate for flakiness: we don't wait for the custom resources to be ready, so we might fail to get metrics from the endpoints because the service is not up yet. I think this happens here with kube-state-metrics.

I wonder if we could add something like "wait for healthy" to the k8s service deployer to ensure that the custom resource is ready.

@ycombinator (Contributor) commented

> I wonder if we could add something like "wait for healthy" to the k8s service deployer to ensure that the custom resource is ready.

Sounds good to me. We have a waitUntilTrue function that you might be able to reuse for this.

@mtojek (Contributor) commented Apr 6, 2021

> Sounds good to me. We have a waitUntilTrue function that you might be able to reuse for this.

I think this one runs at a different level. We need something at the service's Setup() level.

@mtojek (Contributor) commented Apr 7, 2021

One more question, @ChrsMark.

Do you have any proposal for the "wait for healthy" behavior? Would you like us to use any particular API/method? I'm going to work on this soon, possibly today.

@ChrsMark (Member, Author) commented Apr 7, 2021

Hey! Since we're dealing with metrics collection, mostly over HTTP, maybe using liveness checks can work here. That way, when you add a custom resource you are responsible for defining the liveness probe, and the deployer will check the status of the pod and wait until it is ready. Feel free to ping me when you are on it to brainstorm further.
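For illustration, here is a minimal sketch of what such a probe could look like on the custom resource's Deployment; the image, port, and path are assumptions based on the upstream kube-state-metrics manifests, so treat it as a sketch rather than the manifest used in this PR:

```yaml
# Hypothetical excerpt: give the custom resource a readiness/liveness probe so
# the service deployer can simply wait for the pod to report Ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      containers:
        - name: kube-state-metrics
          # Image tag is illustrative only.
          image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

With probes like these in place, the "wait for healthy" step would not need any service-specific knowledge; it could simply poll the pod's Ready condition.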

@mtojek (Contributor) commented Apr 7, 2021

> Feel free to ping me when you are on it to brainstorm further.

What if the custom definitions describe multiple pods and some of them haven't been created yet (we can see that there is one pod still in progress)?

@ChrsMark force-pushed the add_k8s_integration_tests branch from 8810952 to bf7a8e2 on April 19, 2021 10:52
@mtojek (Contributor) commented Apr 19, 2021

FYI, I pushed the update of elastic-package to the master branch.

@ChrsMark (Member, Author) commented

Blocked on elastic/elastic-package#332

@ChrsMark requested a review from mtojek on April 21, 2021 14:00
- name: pod.uid
  type: keyword
  description: >
    Kubernetes Pod UID
Contributor:
nit: pod vs Pod (casing consistency)

- name: deployment.name
  type: keyword
  description: >
    Kubernetes deployment name
Contributor:
nit: not sure if we need the prefix "Kubernetes" in every field description

Member Author:
This is how we have it in Beats too; I don't think it is a bad pattern.

- http://{{Hostname}}:10251
- https://0.0.0.0:10257
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: "none"
Contributor:
nit: why exactly is ssl.verification_mode set to "none"? Is it because we're lacking some permissions or certs?

Member Author:
We use the token for the auth part, but since the server is HTTPS we need either a CA or the --insecure equivalent (ssl.verification_mode: "none") to access it. We follow this pattern for the kubelet's API too, and it most probably works fine in real-world clusters. If users want to use CAs they can, but they need to provide them properly. (We had some similar cases with kubelet API access on OpenShift, where we had to find the proper CA generated by k8s in order to use it with Beats.) For this testing scenario I think we are safe.
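For reference, a rough sketch of what the CA-based alternative could look like, assuming the Beats-style ssl.certificate_authorities setting applies here and that the in-pod service account CA bundle is used (the paths are the standard mount locations, but treat them as assumptions):

```yaml
hosts:
  - https://0.0.0.0:10257
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
# Instead of ssl.verification_mode: "none", trust the CA bundle that is
# mounted into every pod next to the service account token (assumed path).
ssl.certificate_authorities:
  - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```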

@@ -0,0 +1,108 @@
apiVersion: rbac.authorization.k8s.io/v1
Contributor:
Did you consider using a package-scoped _dev/deploy that could be common for all data streams?
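For example, something along these lines; the layout is only a sketch to illustrate the idea, and the exact directories elastic-package expects are an assumption:

```
kubernetes/
  _dev/
    deploy/
      k8s/
        kube-state-metrics.yaml   # manifests shared by all data streams
  data_stream/
    state_pod/
    state_deployment/
    ...
```

i.e. one set of manifests for the whole package, rather than one copy under each data_stream/<name>/_dev/deploy/k8s/.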

name: kubernetes
title: Kubernetes
-version: 0.4.5
+version: 0.5.0
Contributor:
Would you mind updating the changelog as you added new field definitions?

@ChrsMark force-pushed the add_k8s_integration_tests branch from ba06b38 to a91cae2 on April 22, 2021 10:44
@mtojek self-requested a review on April 22, 2021 13:13
@mtojek (Contributor) left a comment
Ship it, please!

@ChrsMark merged commit fd944b8 into elastic:master on Apr 22, 2021