New ingress controller chart #3140
Conversation
Great job 👍
CI tests now finally pass, both on a 1.19 cluster and on 1.26 kube-ci using the new chart wrapper.
Following this PR, as well as #3154, I think we could also test using only TLS 1.3, and test using ciphers with bigger key sizes.
The federator client doesn't support TLS 1.3 yet. It requires implementing a C function in HsOpenSSL.
Awesome 👍 LGTM
Co-authored-by: Sebastian Willenborg <[email protected]>
```yaml
# -- Use a `DaemonSet` or `Deployment`
kind: DaemonSet
service:
  type: NodePort # or LoadBalancer
  externalTrafficPolicy: Local
  nodePorts:
    # The nginx instance is exposed on ports 31773 (https) and 31772 (http)
    # on the node on which it runs. You should add a port-forwarding rule
    # on the node or on the loadbalancer that forwards ports 443 and 80 to
    # these respective ports.
    https: 31773
    http: 31772
```
A more opportunistic default here would be to use `kind: Deployment` and `type: LoadBalancer`, and to document how to configure this if you have to resort to NodePort.
The NodePort approach always requires manual configuration of some external load balancer/firewall to round-robin between node IPs and is error-prone. It's also a bit annoying to have to decide on some global ports that may not be used otherwise.
Most managed K8s clusters have support for LoadBalancers; you can also get this for your own clusters on hcloud etc. It's even possible to do it for pure bare metal, without any "load balancer hardware", by using BGP or some leader election over who announces the "load balancer IP" via ARP (https://metallb.universe.tf/configuration/_advanced_l2_configuration/).
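For illustration, a minimal override along those lines might look like the sketch below. It assumes the chart accepts the same `kind`/`service` keys shown in the snippet above; `replicaCount` is a hypothetical addition, not taken from this PR.

```yaml
# Sketch: run the controller as a Deployment behind a provisioned load balancer
# instead of exposing host NodePorts.
kind: Deployment
replicaCount: 2                  # hypothetical value; choose the redundancy you need
service:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IPs, as in the NodePort setup
```

With `type: LoadBalancer`, the cloud provider (or MetalLB) allocates the external IP, so no fixed node ports have to be reserved.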
I swapped the defaults as you suggested and documented how to get the previous behaviour back.
## Main change

New 'ingress-nginx-controller' wrapper chart compatible with kubernetes versions [1.23 - 1.26]. The old one, 'nginx-ingress-controller' (compatible only up to k8s 1.19), is now DEPRECATED.

We advise upgrading the kubernetes version in use to 1.23 or higher (we tested on kubernetes version 1.26), and making use of the new ingress controller chart. Main features:

- up-to-date nginx version (`1.21.6`)
- TLS 1.3 support (including allowing specifying which cipher suites to use)
- security fixes
- no more accidental logging of Wire access tokens under specific circumstances [jira ticket](https://wearezeta.atlassian.net/browse/SEC-47)

The 'kind: Ingress' resources installed via the 'nginx-ingress-services' chart remain compatible with both the old and the new ingress controller, and k8s versions [1.18 - 1.26]. In case you upgrade an existing kubernetes cluster (not recommended), you may need to first uninstall the old controller before installing the new controller chart.

In case you have custom overrides, you need to modify the directory name and top-level configuration key:

```diff
 # If you have overrides for the controller chart (such as cipher suites), ensure to rename file and top-level key:
-# nginx-ingress-controller/values.yaml
+# ingress-nginx-controller/values.yaml

-nginx-ingress:
+ingress-nginx:
   controller:
     # ...
```

and double-check that all overrides you use are indeed provided under the same name by the upstream chart. See also the default overrides in [the default values.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/ingress-nginx-controller/values.yaml).

In case you use helmfile, change your ingress controller like this:

```diff
 # helmfile.yaml
 releases:
-  - name: 'nginx-ingress-controller'
+  - name: 'ingress-nginx-controller'
     namespace: 'wire'
-    chart: 'wire/nginx-ingress-controller'
+    chart: 'wire/ingress-nginx-controller'
     version: 'CHANGE_ME'
```

For more information, read the documentation under https://docs.wire.com/how-to/install/ingress.html (or go to https://docs.wire.com and search for "ingress-nginx-controller").

## Other internal changes

- integration tests on CI will use either the old or the new ingress controller, depending on which kubernetes version they run on
- upgrade `kubectl` to the default from the nixpkgs channel (currently `1.26`) by removing the manual version pin on 1.19
- upgrade `helmfile` to the default from the nixpkgs channel by removing the manual version pin
- upgrade `helm` to the default from the nixpkgs channel by removing the manual version pin
- add `kubelogin-oidc` so the kubectl in this environment can also talk to kubernetes clusters using OIDC
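To illustrate the cipher-suite feature mentioned above, an override for the new chart might look roughly like the sketch below. It assumes the `ingress-nginx.controller` nesting from the rename diff and uses the upstream ingress-nginx ConfigMap options `ssl-protocols` and `ssl-ciphers`; the concrete defaults shipped in the chart's values.yaml may differ.

```yaml
# ingress-nginx-controller override (sketch): restrict TLS protocols and ciphers
ingress-nginx:
  controller:
    config:
      ssl-protocols: "TLSv1.2 TLSv1.3"   # illustrative choice; adjust to your requirements
      ssl-ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
```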
You should add a port-forwarding rule on the node or on the loadbalancer that forwards ports 443 and 80 to these respective ports. Any traffic hitting the http port simply gets an HTTP 30x redirect to https.

Downsides of this approach: The NodePort approach always requires manual configuration of some external load balancer/firewall to round-robin between node IPs and is error-prone. It's also a bit annoying to have to decide on some global ports that may not be used otherwise.
We now do set some ports for the NodePort approach (31773 and 31772), even though we don't use them as long as the user doesn't explicitly set `type: NodePort`. We might want to remove them from values.yaml and explicitly let the user decide (and put it in the example in L63), so this paragraph makes sense.
Good point. #3173
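If the defaults are dropped from values.yaml as discussed, a user who wants the NodePort setup would then state it explicitly in their overrides, roughly as in this sketch (mirroring the previous defaults, not the chart's shipped values):

```yaml
# User override (sketch): opt into the DaemonSet + NodePort setup explicitly
kind: DaemonSet
service:
  type: NodePort
  externalTrafficPolicy: Local
  nodePorts:
    https: 31773   # forward port 443 on your load balancer/firewall to this port
    http: 31772    # forward port 80 here; nginx answers with a 30x redirect to https
```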
Most managed K8s clusters have support for LoadBalancers, you can also get this for your own clusters in hcloud etc. It's even possible to do it for pure bare metal, without any "load balancer hardware", by using BGP or some leadership election over who's announcing the "load balancer ip" via ARP (https://metallb.universe.tf/configuration/_advanced_l2_configuration/).
This is okay for a github discussion, but a bit too sloppy for public docs. What about:

Most managed K8s clusters have support for LoadBalancers.
Manually set up K8s clusters can also support this, by using a provider/environment-specific CCM
(see hcloud and digitalocean for examples).
In case you're provisioning on bare metal, without any hardware load balancer support in front,
you might be using MetalLB, which supports BGP or failover L2 ARP announcements.
The choice of CCM highly depends on the environment you choose to deploy wire-server in.
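For the bare-metal case mentioned in the suggested wording, a minimal MetalLB L2 setup might look like the sketch below, based on the linked MetalLB docs. The pool name and address range are placeholders, and MetalLB >= 0.13 with its CRD-based configuration is assumed.

```yaml
# MetalLB L2 announcement (sketch), applied in the metallb-system namespace
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool            # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.10-192.0.2.20     # placeholder range; use addresses routable in your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool
```

With this in place, a `Service` of `type: LoadBalancer` (such as the ingress controller's) gets an IP from the pool, and MetalLB answers ARP for it from one elected node.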
… too (#3138) Fix a rate-limit exemption whereby authenticated endpoints did not get the `unlimited_requests_endpoint` setting, if set, applied. This is a concern for the webapp and calls to /assets, which can happen in larger numbers on initial loading. A previous change in [this PR](#2786) had no effect. This PR also increases default rate limits, to compensate for the [new ingress controller chart](#3140)'s default topologyAwareRouting.