Fix for bugz 1564984 - rejected by router when using router sharding and NAMESPACE_LABELS #19330


Merged · 1 commit · Apr 20, 2018
11 changes: 11 additions & 0 deletions pkg/router/controller/host_admitter.go
@@ -87,6 +87,10 @@ type HostAdmitter struct {
// ownership (of subdomains) to a single owner/namespace.
disableNamespaceCheck bool

// allowedNamespaces is the set of allowed namespaces.
// Note that nil (aka allow all) has a different meaning than empty set.
allowedNamespaces sets.String

claimedHosts RouteMap
claimedWildcards RouteMap
blockedWildcards RouteMap
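The nil-versus-empty distinction called out in the new comment is load-bearing: a nil set means no namespace filter has been applied (admit everything), while an empty set means the filter matched nothing (admit nothing). A minimal sketch of the check, using a plain map type as a hypothetical stand-in for `sets.String` (which is itself a map type, so reading from a nil value is safe in Go):

```go
package main

import "fmt"

// StringSet is a simplified stand-in for apimachinery's sets.String.
type StringSet map[string]struct{}

// Has reports whether s contains item; a nil receiver always reports false.
func (s StringSet) Has(item string) bool {
	_, ok := s[item]
	return ok
}

// allowed mirrors the guard in HandleRoute: a nil filter admits every
// namespace, while an empty (non-nil) filter admits none.
func allowed(filter StringSet, namespace string) bool {
	return filter == nil || filter.Has(namespace)
}

func main() {
	var noFilter StringSet // nil: HandleNamespaces was never called
	empty := StringSet{}   // empty: the label selector matched nothing
	shard := StringSet{"ns1": {}, "ns2": {}}

	fmt.Println(allowed(noFilter, "anyns")) // true
	fmt.Println(allowed(empty, "anyns"))    // false
	fmt.Println(allowed(shard, "ns1"))      // true
	fmt.Println(allowed(shard, "other"))    // false
}
```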
@@ -122,6 +126,12 @@ func (p *HostAdmitter) HandleEndpoints(eventType watch.EventType, endpoints *kap

// HandleRoute processes watch events on the Route resource.
func (p *HostAdmitter) HandleRoute(eventType watch.EventType, route *routeapi.Route) error {
if p.allowedNamespaces != nil && !p.allowedNamespaces.Has(route.Namespace) {

pravisankar commented:
We could push this namespace label filtering up a level (to the router controller); that way every router plugin (host-admitter, unique-host, etc.) doesn't need to handle this case.

ramr (Contributor, PR author) replied:
@pravisankar that's a good idea. In an ideal world this wouldn't need to be in every plugin, but we need a refactor to consolidate the work being done in both host_admitter (which does the wildcard-specific checks) and unique_host (which does a host-uniqueness check that overlaps with host_admitter). There's duplication in there that needs a review before we can collapse this into a single plugin. So, in a follow-up PR after this release?

pravisankar (Apr 12, 2018) replied:
I'm fine with the follow-up PR.
I wasn't even thinking about consolidating host uniqueness and the host admitter. My idea was to filter events (namespaces, endpoints, routes, etc.) at the source, which is the router controller, and then propagate the filtered events to the chain of router plugins (unique host, admitter, status, validator).
Currently some of the router plugins do their own filtering, like the unique_host plugin. That raises one more concern: what happens in the scenario below?

  • router1 handles all namespaces matching labelset ls1
  • router2 handles all namespaces matching labelset ls2
  • There could be overlap between labelsets ls1 and ls2.

The unique_host plugin filters routes based on its namespace filter. For the same route, router1 may reject it while router2 accepts it, and if we look at all the routes in the cluster, host uniqueness may be broken. Don't we want the unique_host plugin to look at all the routes in the cluster to determine host uniqueness, even when a namespace label filter is present?
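The overlap concern can be made concrete. In this hypothetical sketch (the types are simplified stand-ins, not the actual router code), each shard tracks host claims only for routes in its own namespace set, mirroring unique_host's filtered view, so two shards with overlapping label sets can each admit a different route for the same host and no single shard ever sees the conflict:

```go
package main

import "fmt"

// route is a pared-down stand-in for routeapi.Route.
type route struct {
	namespace, name, host string
}

// shard is a hypothetical per-router uniqueness checker: it only inspects
// routes whose namespace it services.
type shard struct {
	namespaces map[string]bool
	claims     map[string]string // host -> "namespace/name" of the claimant
}

func newShard(namespaces ...string) *shard {
	s := &shard{namespaces: map[string]bool{}, claims: map[string]string{}}
	for _, ns := range namespaces {
		s.namespaces[ns] = true
	}
	return s
}

// admit returns true if the route's host is unclaimed within this shard's view.
func (s *shard) admit(r route) bool {
	if !s.namespaces[r.namespace] {
		return false // outside this shard: ignored, not rejected
	}
	owner := r.namespace + "/" + r.name
	if claimant, ok := s.claims[r.host]; ok && claimant != owner {
		return false // host already claimed by another route in this view
	}
	s.claims[r.host] = owner
	return true
}

func main() {
	// labelsets ls1 and ls2 overlap on the "shared" namespace.
	router1 := newShard("ns1", "shared")
	router2 := newShard("ns2", "shared")

	a := route{"ns1", "app-a", "www.example.test"}
	b := route{"ns2", "app-b", "www.example.test"}

	fmt.Println(router1.admit(a)) // true: first claim in router1's view
	fmt.Println(router2.admit(b)) // true: router2 never saw route a
}
```

Both admissions succeed, so cluster-wide host uniqueness is broken even though each shard is internally consistent.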

ramr replied:
Yeah, I agree with moving the filtering out (up in the chain).

If I understood your question correctly, router{1,2} are two separate router environments that co-exist on the same cluster. The unique_host plugin handles the namespaces (via HandleNamespaces) that a particular router is filtering on. It does not filter on its own set; it filters on whatever the router is filtering on.
That means the unique_host plugin gets exactly the set of namespaces matching ls1 or ls2 (for the router it is running inside of) and admits routes based on those same namespaces.

Regarding doing the uniqueness check cluster-wide (across multiple routers, or across all routes): I don't think that's a good thing, or something we can do, for a few reasons:

  • A router might not have access to all the routes (i.e., it could be namespace scoped).
  • A sharded router environment could service a subset of the routes in each shard. You may do this for performance or distribution reasons on a high-occupancy cluster. A cluster-wide uniqueness check becomes more expensive there, and the shards could have overlaps that you actually want. Again, this could all be namespace scoped, or scoped to a subset of namespaces.
  • To go further down the rabbit hole ;^) on the above point, you could have the same routes, or even different routes with the same host name, pointing to different services for SLA reasons (high/medium/low), and a front-end load balancer could select between the shards based on its own SLAs.
  • It makes it difficult to deploy multiple environments in the same cluster.
    Example: different namespaces on the same cluster could represent different environments (staging/qe/multiple-devs etc.), and each environment/namespace runs its own router and other objects (routes, services, etc.). The host name/route is then specific to that environment, and you don't want to enforce a cluster-wide uniqueness check.
  • You have two routes that point to different services (say version1 and version2 of a service) and you want to use label filters to bring version2 online without downtime or a new deployment: just set everything up for version2, then change the labels on the route that points to version1.

pravisankar replied:
@ramr Thank you for the detailed explanation. Now I understand the scope of the unique_host plugin.

// Ignore routes we don't need to "service" due to namespace
// restrictions (ala for sharding).
return nil
}

if err := p.admitter(route); err != nil {
glog.V(4).Infof("Route %s not admitted: %s", routeNameKey(route), err.Error())
p.recorder.RecordRouteRejection(route, "RouteNotAdmitted", err.Error())
@@ -151,6 +161,7 @@ func (p *HostAdmitter) HandleRoute(eventType watch.EventType, route *routeapi.Ro
// HandleNamespaces limits the scope of valid routes to only those that match
// the provided namespace list.
func (p *HostAdmitter) HandleNamespaces(namespaces sets.String) error {
p.allowedNamespaces = namespaces
return p.plugin.HandleNamespaces(namespaces)
}
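The new HandleNamespaces follows the router's chain-of-responsibility pattern: record what this plugin needs from the event, then delegate to the next plugin so the filter propagates down the chain. A hypothetical minimal version of that flow (interface and type names are simplified, not the actual controller types):

```go
package main

import "fmt"

// plugin is a simplified version of the router plugin interface: each
// handler processes an event and forwards it down the chain.
type plugin interface {
	HandleNamespaces(namespaces map[string]bool) error
}

// filteringPlugin records the allowed-namespace set (as HostAdmitter does)
// before delegating to the next plugin in the chain.
type filteringPlugin struct {
	name    string
	allowed map[string]bool
	next    plugin
}

func (p *filteringPlugin) HandleNamespaces(namespaces map[string]bool) error {
	p.allowed = namespaces // remember the filter for later HandleRoute calls
	fmt.Printf("%s recorded %d namespaces\n", p.name, len(namespaces))
	if p.next == nil {
		return nil
	}
	return p.next.HandleNamespaces(namespaces) // propagate down the chain
}

func main() {
	// unique_host -> host_admitter, mirroring the plugin order discussed above.
	admitter := &filteringPlugin{name: "host_admitter"}
	unique := &filteringPlugin{name: "unique_host", next: admitter}

	_ = unique.HandleNamespaces(map[string]bool{"ns1": true, "ns2": true})
}
```

Every plugin in the chain ends up holding the same namespace set, which is why each one can apply the filter locally until the filtering is hoisted into the router controller.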

122 changes: 122 additions & 0 deletions pkg/router/controller/host_admitter_test.go
@@ -3,6 +3,7 @@ package controller
import (
"fmt"
"math/rand"
"reflect"
"strings"
"testing"
"time"
@@ -1004,3 +1005,124 @@ func TestDisableOwnershipChecksFuzzing(t *testing.T) {
t.Errorf("Unexpected errors:\n%s", strings.Join(errors.List(), "\n"))
}
}

func TestHandleNamespaceProcessing(t *testing.T) {
p := &fakePlugin{}
recorder := rejectionRecorder{rejections: make(map[string]string)}
admitter := NewHostAdmitter(p, wildcardAdmitter, true, false, recorder)

// Set the namespaces handled by the host admitter plugin. The fakePlugin
// in the test chain doesn't support this, so ignore its "not expected" error.
err := admitter.HandleNamespaces(sets.NewString("ns1", "ns2", "nsx"))
if err != nil && err.Error() != "not expected" {
t.Fatalf("unexpected error: %v", err)
}

tests := []struct {
name string
namespace string
host string
policy routeapi.WildcardPolicyType
expected bool
}{
{
name: "expected",
namespace: "ns1",
host: "update.expected.test",
policy: routeapi.WildcardPolicyNone,
expected: true,
},
{
name: "not-expected",
namespace: "updatemenot",
host: "no-update.expected.test",
policy: routeapi.WildcardPolicyNone,
expected: false,
},
{
name: "expected-wild",
namespace: "ns1",
host: "update.wild.expected.test",
policy: routeapi.WildcardPolicySubdomain,
expected: true,
},
{
name: "not-expected-wild-not-owner",
namespace: "nsx",
host: "second.wild.expected.test",
policy: routeapi.WildcardPolicySubdomain,
expected: false,
},
{
name: "not-expected-wild",
namespace: "otherns",
host: "noupdate.wild.expected.test",
policy: routeapi.WildcardPolicySubdomain,
expected: false,
},
{
name: "expected-wild-other-subdomain",
namespace: "nsx",
host: "host.third.wild.expected.test",
policy: routeapi.WildcardPolicySubdomain,
expected: true,
},
{
name: "not-expected-plain-2",
namespace: "notallowed",
host: "not.allowed.expected.test",
policy: routeapi.WildcardPolicyNone,
expected: false,
},
{
name: "not-expected-blocked",
namespace: "nsx",
host: "blitz.domain.blocked.test",
policy: routeapi.WildcardPolicyNone,
expected: false,
},
{
name: "not-expected-blocked-wildcard",
namespace: "ns2",
host: "wild.blocked.domain.blocked.test",
policy: routeapi.WildcardPolicySubdomain,
expected: false,
},
}

for _, tc := range tests {
route := &routeapi.Route{
ObjectMeta: metav1.ObjectMeta{
Name: tc.name,
Namespace: tc.namespace,
UID: types.UID(tc.name),
},
Spec: routeapi.RouteSpec{
Host: tc.host,
WildcardPolicy: tc.policy,
},
Status: routeapi.RouteStatus{
Ingress: []routeapi.RouteIngress{
{
Host: tc.host,
RouterName: "nsproc",
Conditions: []routeapi.RouteIngressCondition{},
WildcardPolicy: tc.policy,
},
},
},
}

err := admitter.HandleRoute(watch.Added, route)
if tc.expected {
if err != nil {
t.Fatalf("test case %s unexpected error: %v", tc.name, err)
}
if !reflect.DeepEqual(p.route, route) {
t.Fatalf("test case %s expected route to be processed: %+v", tc.name, route)
}
} else if err == nil && reflect.DeepEqual(p.route, route) {
t.Fatalf("test case %s did not expect route to be processed: %+v", tc.name, route)
}
}
}