Can't seem to make it work with existing ingress in GKE #48
Hey! Did you configure the ingress in your values.yaml? You can check if the ingress exists with `kubectl get ingress`.
I did not! I thought that meant it would create an ingress, but I already have the ingress?
Did you create it independently? You seem to have configured the service, but not necessarily the ingress. You can try configuring it in the ingress:

```yaml
enabled: true
annotations:
  kubernetes.io/ingress.class: "<your_ingress_controller>"
hosts:
  - <your.host.com>
```
The ingress is preexisting and serves a bunch of other services. I'll try that.
But I'm not sure how to find out what the ingress.class should be? It's not nginx. It's the GKE...native ingress thing, I don't even know what it's called and googling isn't helping here (I didn't set this cluster up). I noticed some of my (well, not mine, my company's) other services have an annotation of
OK, maybe it's

As an aside, is there a way to reload configuration from values.yaml for the running pod without uninstalling and reinstalling it? The Helm docs aren't helping me on that.
Hmm, that seems to have created a new ingress, which is not what I need to happen. I guess I'm still not getting something fundamental here.
That will depend on the tools you're using. Assuming it's Helm, something like `helm upgrade --reuse-values` should do it.
Yes, it's Helm. I don't want to reuse values, though? I want to apply the new ones? I'm trying to test different settings without having to tear down and build up again.
With `--reuse-values`, Helm keeps the values from the previous release and merges anything new you pass on top, so your updated values.yaml is still applied. It doesn't seem to be an issue with Verdaccio, though. You don't necessarily need to create the ingress from the Helm chart if you have an independent one that's properly configured. Can you discuss that with your cluster manager?
Yeah, we're (me and the person who created the cluster/ingress and other services) currently both stumped. I definitely don't need an ingress to be created by Helm; I just can't figure out how to make it talk to the existing one. I guess I need to download the whole chart and modify that targetPort to see if I can make it act like our existing services (they're not configured with Helm, just Kubernetes YAML files). I'm kind of out of ideas for what else could be preventing it from working when local port forwarding to 4873 does work.
Just so I'm clear, the ingress section in values.yaml only controls whether the chart creates an ingress? If I already have one pointing at the service, I can leave it disabled?
Yes, if you already have an ingress you may use it, as long as it's properly configured. In that case, keep the ingress from the chart disabled.
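For reference, a minimal values.yaml along those lines might look like the following. This is only a sketch; it assumes the chart keys mentioned in this thread (ingress.enabled, service.port) and Verdaccio's default port of 4873.

```yaml
# Sketch: rely on the preexisting ingress and let the chart create only the Service.
ingress:
  enabled: false   # an external ingress already exists, so the chart shouldn't create one
service:
  port: 4873       # Verdaccio's default listen port; the chart's default Service type is ClusterIP
```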
Thank you! I'll keep banging on it. I don't think I'm significantly further ahead of where I was before, but I guess I have a next step I can try (changing port and targetPort in the templates). Gonna sleep some and hope it makes more sense in the morning.
Hey y'all, I've done a lot of work with the GKE ingress controllers, so I should be able to help untangle this for you.

First, @swelljoe, when you say an ingress already exists, do you mean an ingress controller, or did you create an ingress resource for Verdaccio already? If so, can you share it? Feel free to remove anything sensitive, of course.

Regarding the service: any service you expose in GKE with an ingress should have the service annotation for NEGs (container-native load balancing).

For the service values: in general I would strongly recommend not specifying the stuff you aren't using, and perhaps even consider leaving out anything that's the same as the default, since they will be merged. Since the default is a ClusterIP service, you really only need to specify the annotations. It's possible that some of the default empty values are triggering template paths they shouldn't, which may be our bug. We really shouldn't spec them in the default either.

```yaml
service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
```

In the event you don't have the ingress resource already, this should get you what you want:

```yaml
ingress:
  enabled: true
  annotations:
    # this is for a VPC-internal ingress; for an external one, either remove the annotation or use "gce"
    kubernetes.io/ingress.class: "gce-internal"
  hosts:
    - <your.host.com>
  paths:
    - "/*" # the gce ingress needs a glob where the nginx ingress doesn't
```

This is all assuming, of course, that we don't have any chart bugs, which is quite possible as well.
Ah, you may also need
Thanks, @kav. I'm working with an existing ingress. I'd already gone back to having the chart's ingress disabled. My ingress looks like this (condensed and sanitized):
Which works for all of our other services, but not this one. The service shows OK, but the ingress treats it as unhealthy. One thing to note is that all of our other services have
Is this a typo in the ingress, or just in your comment?
Er, just a typo in the comment. It was correct in the actual ingress.
My ingress, created from the Helm chart, for AWS/EKS, looks something like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
    - host: npm.domain.com
      http:
        paths:
          - backend:
              serviceName: verdaccio-verdaccio
              servicePort: 4873
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - npm.domain.com
      secretName: sec-npm
status:
  loadBalancer:
    ingress:
      - hostname: domain.amazonaws.com
```
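For comparison, a GKE-flavoured ingress pointing at the same kind of Service would be shaped much the same. This is only a sketch: it reuses the ingress class and glob path mentioned earlier in the thread and the service name that comes up below; the host and resource name are placeholders.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"   # GKE's built-in (GCE) ingress controller
spec:
  rules:
    - host: npm.example.com
      http:
        paths:
          - path: /*                      # the GCE ingress expects a glob here
            backend:
              serviceName: npmserver-verdaccio
              servicePort: 4873
```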
Can I ask what your helm-generated Service looks like? I suspect that's where mine is falling down...my working services look like (stripped of non-ingress related stuff):
While the helm-generated one looks like this:
Could this be a namespace issue? Is your service in the same namespace as the ingress? My service looks pretty similar to yours.
Yes, namespace is definitely correct. |
Can you share the output of `kubectl get svc -o yaml` for the Verdaccio service, and the relevant part of the ingress?
svc (all the unrelated stuff removed):
The ingress is complicated and not sensible-looking enough to post (it's got a bazillion hosts, as it's providing ingress for a bunch of services), but I can say with 100% certainty that npmserver-verdaccio is among them. I notice it has
Just for completeness, ingress cleaned up and sanitized:
Hum, looks similar to mine. You may want to further sanitize and remove the public IP address from your last comment.
Can you replace the named `targetPort: http` in the service with the numeric `targetPort: 4873`?
By the way, does the DNS for your host resolve correctly?
Yes, DNS is correct. And, yeah, that's my next thing to try (replacing targetPort and port in the templates), as that's the only thing I can see that differs from my other services that work. I don't know how to do that yet; I'm still reading the Helm docs about how to install from a local chart directory or how to override one template file with a local one. (This is the first time I've ever used Helm, still learning.)
And, the health check is failing, which doesn't use DNS. It's definitely a problem in the ingress<->service somewhere. |
I think you may be able to achieve that with `kubectl edit` on the service, just for testing.
I'm not sure the healthcheck uses the ingress at all |
I didn't even think about editing it directly for testing! I've been fighting with this too long to think clearly. Weirdly the Service itself shows healthy. It's only the...I don't even know what to call it, in the ingress, that is showing unhealthy. |
If that doesn't work, can you share the metadata section (annotations, labels) of your service and ingress? Can you check the
That fixed it! Thank you! So... it seems like it'd be useful to allow modifying targetPort in values.yaml? I can probably make a PR to do that (it will take me a day or so to wrap my head around the template language and how all the pieces fit together, but I think I can make it go), if that'd be helpful.
Glad to hear that! Hum, maybe we should check first why it's failing to translate a named port. Then, if it's normal behaviour (if for some reason your type of deployment doesn't work with named ports), a PR would be very welcome :)
I don't know enough to know what's normal. But, once I changed the service to use `targetPort: 4873` instead of the named port, it started working.
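Spelled out, the manual change amounted to something like this in the Service spec. This is a sketch of the ports section only, not the chart's exact output:

```yaml
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 4873
      targetPort: 4873   # previously the named port "http"; the numeric value is what made the GKE health check pass here
      protocol: TCP
```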
And if you change
Would that be the same as changing the service.port in values.yaml? That did not work... but when I did that I couldn't even connect to Verdaccio via a port-forward (which did work through all of this as long as the port was 4873), so I think that maybe broke something else.
Yes, try setting that. Since, according to what you shared here before, your ingress is configured with
Do you happen to have any other deployments/pods in the namespace that are defining the port name referenced in charts/charts/verdaccio/templates/deployment.yaml (lines 42 to 43 at a1aae18)?
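For context on why that question matters: a named targetPort on a Service is resolved against the port names declared by the containers the Service selects, so another pod matching the selector, or a missing or renamed container port, can leave the endpoint unresolved. A generic illustration follows, not the chart's verbatim template:

```yaml
containers:
  - name: verdaccio
    ports:
      - name: http          # a Service with `targetPort: http` resolves against this name
        containerPort: 4873
        protocol: TCP
```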
I'm trying to get Verdaccio running in an existing cluster that already works for other services, e.g. a pypiserver with the following service YAML (this one works in my cluster; note the ClusterIP is not defined in my local service YAML, it's filled in when the service is created with kubectl apply):

And when I look at the service page for this one, it shows a port of 80 and a target port of 8080.
But Verdaccio installed with Helm, while it can be reached if I set up a local port forward, isn't working with the ingress on the public IP. The Verdaccio YAML in GKE looks like this (this service does not work, despite looking, to me, very similar to the one above):
When I look at the service page for this one, it shows a port of 4873 and a target port of 0, so that feels like it might be a problem somehow. I don't see any way to explicitly set targetPort (and it seems like that ought to be automatic anyway, since Helm is setting up the service and knows where it runs better than I do). I think I'm misunderstanding something, but I can't find any clues as to what, after pretty extensive googling. I'm still pretty new to Kubernetes, though, so it may be obvious to someone else what I'm doing wrong.
My values.yaml contains:

I've also tried explicitly setting the externalIPs and loadBalancerIP, but that didn't seem to work either, and those aren't specified for my other services that are working, AFAICS, so I don't think they should be needed here either.
Anybody have a clue they can lend me for how this is supposed to be configured with a GKE Ingress?