This repository was archived by the owner on Sep 30, 2020. It is now read-only.

Option to automatically assign externally managed ELBs to the worker autoscaling group #93

Merged

Conversation

pieterlange
Contributor

This is useful for operators that have special ELB requirements like the proxyprotocol or where loadbalancer/network asset management is delegated to a different team.

@codecov-io

codecov-io commented Nov 24, 2016

Current coverage is 55.66% (diff: 100%)

Merging #93 into master will increase coverage by 0.12%

@@             master        #93   diff @@
==========================================
  Files             4          4          
  Lines          1057       1060     +3   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits            587        590     +3   
  Misses          388        388          
  Partials         82         82          

Powered by Codecov. Last update faadb70...b4b1eba

@mumoshu
Contributor

mumoshu commented Nov 24, 2016

@pieterlange I believe this is something everyone (of course including me 😄) can benefit from when they want to use ELB(s) to proxy or load-balance requests to specific nodePorts, which might be a very common use case 👍

IMHO, it is also fairly common that multiple services to be load-balanced are hosted on a single group of nodes, so that group needs to be attached to multiple ELBs.
To support that case, would you mind making it accept one or more ELB names via a YAML list, instead of just one name?
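
As a rough illustration of what such a setting could look like in cluster.yaml, here is a sketch; the key names (experimental, loadBalancer, names) are placeholders for the purposes of this example and not necessarily the ones this PR introduces:

```yaml
# cluster.yaml (illustrative sketch only; key names are placeholders)
experimental:
  loadBalancer:
    enabled: true
    # Externally managed ELBs to attach to the worker autoscaling group.
    # Accepting a YAML list lets one group of nodes back multiple ELBs.
    names:
      - manually-managed-elb-1
      - manually-managed-elb-2
```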

@pieterlange pieterlange force-pushed the feature/attach-worker-elb branch from ac07dfd to 59bc736 Compare November 24, 2016 14:02
@pieterlange
Contributor Author

No problem! Comin' up.

@pieterlange pieterlange force-pushed the feature/attach-worker-elb branch 2 times, most recently from c933d6b to f68ea8b Compare November 24, 2016 14:30
@pieterlange pieterlange changed the title Option to automatically assign an externally managed ELB to the worker autoscaling group Option to automatically assign externally managed ELBs to the worker autoscaling group Nov 24, 2016
@pieterlange
Contributor Author

May I ask why you wouldn't simply use type: LoadBalancer in the service for this use case? :-) That automatically sets up an ELB.

This is really only for the 'odd' setups where you need to manually/externally configure the ELB with settings that can't be configured from Kubernetes itself.
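
For reference, the alternative mentioned here looks roughly like the following; Kubernetes then provisions and manages the ELB for the Service itself (the names are examples):

```yaml
# Example Service where Kubernetes creates and manages the ELB automatically.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # example name
spec:
  type: LoadBalancer      # the cloud provider provisions an ELB for this Service
  selector:
    app: my-app
  ports:
    - port: 80            # ELB listener port
      targetPort: 8080    # container port behind the Service
```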

@pieterlange
Contributor Author

Also be aware that updating the loadbalancer names replaces the entire autoscaling group.

@pieterlange
Contributor Author

pieterlange commented Nov 24, 2016

To use this feature correctly, the operator currently has to create a manual "glue" security group with inbound rules allowing traffic for every ELB port routed to a nodePort.

It might be more intuitive to reference the ELB security group directly and simply add a new SecurityGroupIngress object allowing blanket access from that ELB to the whole nodePort range (30000-32767). At that point it's up to the ELB administrator to route ports from the ELB to the appropriate nodePort.
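
As a sketch of what that suggested ingress rule could look like (the resource and security-group names are illustrative, not part of this PR), a CloudFormation fragment might be:

```yaml
# CloudFormation sketch: let the externally managed ELB's security group
# reach the whole nodePort range on the worker nodes.
# WorkerSecurityGroup and ExternalELBSecurityGroup are illustrative names.
ElbToNodePortIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref WorkerSecurityGroup              # workers' security group
    SourceSecurityGroupId: !Ref ExternalELBSecurityGroup
    IpProtocol: tcp
    FromPort: 30000                                # nodePort range start
    ToPort: 32767                                  # nodePort range end
```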

@mumoshu
Contributor

mumoshu commented Nov 25, 2016

@pieterlange Thanks for your effort!

My use case is #93 (comment) plus downtime-less Kubernetes cluster replacement with less scripting/manual work. I've never considered using type: LoadBalancer for exposing/load-balancing my services because of those two requirements.

I'd like ELBs to survive multiple recreations of the associated Kubernetes clusters so that there's no need for manual work/additional scripting like:

  • run kube-aws up to bring up the new cluster
  • deploy your app and then wait until up
  • then modify the Route53 record sets to start routing requests to the ELBs created by the type: LoadBalancer services in the new Kubernetes cluster,
  • then run kube-aws destroy to tear down the old cluster

Instead, we can now do:

  • run kube-aws up to bring up the new cluster and attach all the preconfigured ELBs to its workers
  • deploy your app and then wait until up
  • then run kube-aws destroy to tear down the old cluster

I even assume this is not a very odd setup, but one that is occasionally needed 😄

@mumoshu
Contributor

mumoshu commented Nov 25, 2016

@pieterlange Would you mind rebasing this?
It is conflicting too often these days 😭

@pieterlange
Contributor Author

pieterlange commented Nov 25, 2016

Rebased, and alphabetically sorted the Experimental struct while I was at it to help with conflicts.

@pieterlange pieterlange force-pushed the feature/attach-worker-elb branch from 8690b52 to b4b1eba Compare November 25, 2016 07:11
@mumoshu
Contributor

mumoshu commented Nov 25, 2016

Thanks for rebasing and the good idea 👍

@mumoshu mumoshu merged commit 347f9de into kubernetes-retired:master Nov 25, 2016
@mumoshu mumoshu added this to the v0.9.2-rc.1 milestone Nov 25, 2016