
[occm]: support providers that do not require nodeports#3071

Open
oblazek wants to merge 2 commits into kubernetes:master from oblazek:ob-nodeport

Conversation

@oblazek oblazek commented Feb 4, 2026

Add a new loadBalancer option `ProviderRequiresNodeports`, which defaults to `true` to keep the existing behavior. When set to `false`, it lets users skip allocating nodeports when the load balancer provider supports that.

This allows services of type LoadBalancer to be used with `allocateLoadBalancerNodePorts=false` when a provider like ours supports it. In our case we send traffic to our k8s nodes (in OpenStack) using ipip tunneling, so there is no need to allocate nodeports.
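A minimal sketch of the intended usage. The `spec.allocateLoadBalancerNodePorts` field is the standard Kubernetes Service API field; the `[LoadBalancer]` key name in the comment is a guess derived from the Go option name, not something stated in this PR:

```yaml
# cloud.conf (hypothetical key name for the new option):
# [LoadBalancer]
# provider-requires-nodeports = false

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: LoadBalancer
  # Skip NodePort allocation; only works if the provider
  # can reach the nodes without nodeports (e.g. via ipip).
  allocateLoadBalancerNodePorts: false
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```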

occm: support providers that do not need nodeports

@k8s-ci-robot k8s-ci-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Feb 4, 2026
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign dulek for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @oblazek!

It looks like this is your first PR to kubernetes/cloud-provider-openstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/cloud-provider-openstack has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 4, 2026
@k8s-ci-robot
Contributor

Hi @oblazek. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 4, 2026
@oblazek
Author

oblazek commented Feb 4, 2026

added/fixed tests so that the defaults are overwritten when needed

@oblazek
Author

oblazek commented Feb 4, 2026

/test

@k8s-ci-robot
Contributor

@oblazek: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

Details

In response to this:

/test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Add a new loadBalancer option `ProviderRequiresNodeports` which
by default is true to keep the existing behavior but when set to false
allows user to not have to allocate nodeports in case there is support
by the loadbalancer provider.

Signed-off-by: Ondrej Blazek <ondrej.blazek@firma.seznam.cz>
Signed-off-by: Ondrej Blazek <ondrej.blazek@firma.seznam.cz>
newMembers := sets.New[string]()

for _, node := range nodes {
addr, err := nodeAddressForLB(node, svcConf.preferredIPFamily)
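The snippet above builds LB members from node addresses. A rough, hypothetical sketch (not the PR's actual code) of how member port selection could be gated on such an option: when the provider requires nodeports the member targets the allocated NodePort, otherwise it can target the service port directly, since traffic reaches the nodes by other means (e.g. an ipip tunnel).

```go
package main

import "fmt"

// memberPort is a hypothetical helper, not code from this PR.
// With ProviderRequiresNodeports=true (the default), the LB member
// targets the allocated NodePort; with false, no NodePort exists and
// the member can target the service port directly.
func memberPort(providerRequiresNodeports bool, nodePort, svcPort int32) int32 {
	if providerRequiresNodeports {
		return nodePort
	}
	return svcPort
}

func main() {
	fmt.Println(memberPort(true, 30080, 80))  // default behavior: use the NodePort
	fmt.Println(memberPort(false, 0, 80))     // allocateLoadBalancerNodePorts=false
}
```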

wouldn't you also need to use the pod addr instead of the node one in case no nodeport is required?

Author

Well usually not, as OCCM does not watch endpoints. That's the purpose of the CNI (like Cilium), imo. In our case we do ipip tunneling from our externalLB (the one that receives configuration from OCCM in the end) to the k8s nodes where Cilium listens.

Author

so in our case you have: extLB -> node1IP: clientIP -> svcIP (outer IP header: inner IP header)

Author

so that's why there is no need for nodeport


I see, we have a slightly different use case. We use native routing with Cilium, so the pod network is directly reachable from the external LB; therefore no additional (nodePort) hop would be required, and the LB member address could be the pod IP.

Author

Interesting, that's a lot of updates your externalLB needs to process. Anyway, regarding the pod CIDR, how would you pass that to the OCCM? AFAICS it does not watch pods/endpoints.


Not necessarily a lot more updates if you use externalTrafficPolicy: Local, which means traffic is only sent to the nodes where the upstream pods are running. So the number of updates would be higher only when new pods are rescheduled onto the same nodes (where no LB update would be required with nodePort); otherwise it would be the same amount.
I'm not sure about the best way to get the podIPs into the OCCM; would it be feasible to watch the Endpoints/EndpointSlices?

Author

Yeah, that totally would be feasible; we have a similar controller that does that (but it's meant for another LB, not the external one). But I suppose the OCCM maintainers don't want to do that.
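The EndpointSlice-watching idea discussed above could be sketched as follows. This is an illustrative, self-contained sample with minimal stand-in types mirroring a few fields of the `discovery.k8s.io/v1` EndpointSlice API, not OCCM code; a real controller would use client-go informers instead:

```go
package main

import "fmt"

// Minimal stand-ins for a few discovery.k8s.io/v1 EndpointSlice
// fields; illustrative only.
type Endpoint struct {
	Addresses []string
	Ready     bool
}

type EndpointSlice struct {
	Endpoints []Endpoint
}

// readyPodIPs collects addresses of ready endpoints; a controller
// watching EndpointSlices could use these as LB member addresses
// instead of node addresses.
func readyPodIPs(slices []EndpointSlice) []string {
	var ips []string
	for _, s := range slices {
		for _, ep := range s.Endpoints {
			if ep.Ready {
				ips = append(ips, ep.Addresses...)
			}
		}
	}
	return ips
}

func main() {
	slices := []EndpointSlice{{Endpoints: []Endpoint{
		{Addresses: []string{"10.0.1.5"}, Ready: true},
		{Addresses: []string{"10.0.2.7"}, Ready: false}, // not ready: skipped
	}}}
	fmt.Println(readyPodIPs(slices)) // [10.0.1.5]
}
```

With externalTrafficPolicy: Local and nodePort members, the LB only needs an update when the set of hosting nodes changes; with pod-IP members as above, every pod reschedule triggers one.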

