
Fix 1634 #1635


Merged
merged 158 commits on Apr 12, 2024

Conversation

@wind57 (Contributor) commented Apr 12, 2024

No description provided.

wind57 and others added 30 commits December 4, 2021 07:59
wind57 added 17 commits March 28, 2024 21:31
spec:
  containers:
    - name: istio-ctl
      image: istio/istioctl:1.20.1
wind57 (Contributor, Author):

The idea is to remove istioctl from source control, as the bug describes. For that:

use a deployment with istioctl, copy its istioctl binary to the host, and do exactly what we used to do until now.

istioctlManifests(Phase.CREATE);

String istioctlPodName = istioctlPodName();
K3S.execInContainer("sh", "-c",
wind57 (Contributor, Author):

copy istioctl from the pod to the host
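(For context, a minimal sketch of what such a copy step can look like with testcontainers follows. The /usr/local/bin/istioctl path inside the istioctl image, the default namespace, the /tmp/istioctl target and the helper name are illustrative assumptions, not necessarily what this PR does.)

import org.testcontainers.k3s.K3sContainer;

// Sketch only: kubectl runs inside the K3s container; note that `kubectl cp`
// also needs `tar` to be present in the source image.
static void copyIstioctlFromPod(K3sContainer k3s, String istioctlPodName) throws Exception {
    // copy the binary out of the running istioctl pod and make it executable,
    // so later test steps can invoke it exactly as before
    k3s.execInContainer("sh", "-c",
        "kubectl cp default/" + istioctlPodName + ":/usr/local/bin/istioctl /tmp/istioctl"
            + " && chmod +x /tmp/istioctl");
}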

@@ -0,0 +1,21 @@
apiVersion: apps/v1
wind57 (Contributor, Author):

the idea here is to delete the istioctl binary that we have in our repo and instead get it from an image published by Istio. Such an image is the one here

@wind57 marked this pull request as ready for review April 12, 2024 15:08
@wind57 (Contributor, Author) commented Apr 12, 2024

@ryanjbaxter this addresses the enhancement that was raised yesterday by @codefromthecrypt

@codefromthecrypt (Contributor) left a comment

thanks! I still can't test this with colima per the other matter, but it looks like good progress!

spec:
  containers:
    - name: istio-ctl
      image: istio/istioctl:1.20.5
Contributor:

added #1637 as a follow-up since, unlike the other images, this one uses a different Istio version and also isn't pulled locally first (so it will be pulled from within k3s)

wind57 (Contributor, Author):

to be honest, we are not there yet. Thanks to your issue, I've realized that we are not using any GitHub cache for these images (among some others that we use in integration tests), so I need to fix that one too.

Contributor:

be careful with caching: in zipkin we had a security glitch that caused a lot of headaches. We now have this in all our workflows:

      # Don't attempt to cache Docker. Sensitive information can be stolen
      # via forks, and login session ends up in ~/.docker. This is ok because
      # we publish DOCKER_PARENT_IMAGE to ghcr.io, hence local to the runner.

Contributor:

Between the first time we met and now, I understand why we are loading images into k3s.

Basically, this:

  • prevents double-downloading of well-known images (like istio), as they can use the copy from local docker
  • allows you to use the image you just built which isn't yet published

So, these are still helpful regardless of github caching (a rough sketch of that load step is below).
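(For anyone following along, such a load step usually boils down to exporting the image from the local Docker daemon and importing it into the k3s containerd store. The sketch below is illustrative only; the helper name and the /tmp paths are assumptions, and this project may implement it differently.)

import org.testcontainers.k3s.K3sContainer;
import org.testcontainers.utility.MountableFile;

// Sketch only: assumes the image was already exported on the host, e.g. with
// `docker save istio/istioctl:1.20.5 -o /tmp/istioctl.tar`.
static void loadImageIntoK3s(K3sContainer k3s, String tarOnHost) throws Exception {
    // copy the image tarball into the K3s container ...
    k3s.copyFileToContainer(MountableFile.forHostPath(tarOnHost), "/tmp/image.tar");
    // ... and import it into k3s' containerd, so the cluster never pulls it from a registry
    k3s.execInContainer("ctr", "images", "import", "/tmp/image.tar");
}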

wind57 (Contributor, Author):

thank you for the heads-up, and thank you a lot for your issue on this one. From what I remember, the change to using k3s was the biggest PR we had, with months of work, so things might have slipped here and there.

Contributor:

I have been using k3s in Go, and it also took me a very long time to understand all the nuances. I believe it must have been a lot more work before, as few if any were using it with testcontainers when you did this.

Development

Successfully merging this pull request may close these issues.

Support arm64 when running integration tests
4 participants