Multi-tenant Google authentication on Kubernetes
Deploying web services on Kubernetes (k8s) platform generally covers two use cases: things that your customers will use over the public internet, and systems designed for internal use only. Depending on how k8s is configured, such cases could translate into usage of private and public load balancers. Private resources are usually available only from within the network, so it's common to require VPN access for anyone that needs that kind of access. The story is quite different for any publicly available resources.
Let's pretend we're running an organization with the primary website being supercorp.com, and we would like to expose a few different services (apps, dashboards, etc.) via different subdomains, like app1.supercorp.com, and so on. But we also want these apps to be available only to members of our organization, identified by their corporate email. That's a classic use case for Google OAuth.
We could quickly throw some code together and add a Google OAuth flow into our app; luckily, there are plenty of third-party packages for that. But that might not always be an option: you might simply lack the engineering resources or an understanding of the codebase. There should be an easier way to handle this, right? Oauth2-proxy to the rescue!
Oauth2-proxy is a reverse proxy and static file server that provides authentication using providers (Google, GitHub, and others) to validate accounts by email, domain, or group. We're essentially offloading the hard work of authenticating users to the proxy and leaving the rest of the application untouched. The general deployment architecture might look like this:
```
[clients] --> [load balancer] --> [oauth2-proxy] --> [upstream application]
                                        |
                                  [google auth]
```
Oauth2-proxy accepts all incoming requests, validates them against the Google domain, and passes them on to the upstream backend (our app). However, in this mode the proxy acts as a networking bottleneck. Luckily, it supports a different mode of operation: validation only. In that scenario it only validates requests without doing any of the proxying, and performs the authentication flow when necessary. We could also hook up multiple applications.
The network flow diagram would look something like this:
```
                                   / --> [app1]
                                  /        |
                                 /         |
[clients] --> [load balancer] -----> [oauth2-proxy] --> [google auth]
                                 \         |
                                  \        |
                                   \ --> [app2]
```
app1 and app2 run over at app1.supercorp.com and app2.supercorp.com, while oauth2-proxy is available at auth.supercorp.com. Any request that hits app1 or app2 will be checked with oauth2-proxy for validity (via a session cookie) and passed on to the application backend with little network overhead (that depends on the setup™).
We're going to implement the flow described above and deploy oauth2-proxy to handle all our Google authentication needs in a k8s cluster. The same might work for systems deployed across several clusters, but that's out of scope for this post.
- Google Developer Console access, to manage your OAuth applications.
- A development k8s cluster.
- Manager access to modify DNS entries for supercorp.com.
- The k8s cluster MUST use the nginx-ingress ingress controller; the cloud provider does not matter.
NOTE: If you're not using the external-dns controller, make sure to create a new DNS record for auth.supercorp.com and point it at your ingress load balancer IP address(es). Verify that it resolves correctly.
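To double-check the record, a quick lookup can be run from any machine (auth.supercorp.com is the placeholder domain used throughout this post; substitute your own):

```shell
# Resolve the new record; this should print your ingress load balancer IP(s)
dig +short auth.supercorp.com

# Or, if dig isn't installed:
getent hosts auth.supercorp.com
```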
- The authentication endpoint will be at https://auth.supercorp.com.
- Applications running at https://app1.supercorp.com and https://app2.supercorp.com; we're assuming you already have these applications deployed and publicly available.
First, we need to create and configure a new Google OAuth application. Follow these steps to get that sorted out. When you get to the Authorized redirect URIs section of the setup, add https://auth.supercorp.com/oauth2/callback.
Next, we'll start creating the k8s resources. Dump the resource definitions into separate files or into a single file; it should not matter.
We'll use a ConfigMap to hold the list of email addresses allowed to use Google OAuth. Note, this list is for testing purposes only, as it limits the accounts that can access the auth.supercorp.com endpoint. You should configure your Google OAuth app to use the Internal user type instead, to only allow users with @supercorp.com emails in.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth2-proxy
data:
  allowed.txt: |
    firstname.lastname@example.org
    email@example.com
```
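Before wiring the ConfigMap up, it can be handy to sanity-check the allowed list locally. This is just a rough shell sketch; the regex is deliberately simple, not RFC-complete:

```shell
# Recreate the allowed list locally (same entries as the ConfigMap above)
cat > allowed.txt <<'EOF'
firstname.lastname@example.org
email@example.com
EOF

# Flag any line that does not roughly look like an email address
if grep -Evq '^[^@[:space:]]+@[^@[:space:]]+\.[^@[:space:]]+$' allowed.txt; then
  echo "invalid entries found"
else
  echo "all entries look valid"
fi
```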
Create a new Secret for holding credentials of the Google OAuth app:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy
type: Opaque
data:
  # TIP: Use `echo -n VALUE | base64` to encode raw secret values.
  OAUTH2_PROXY_CLIENT_ID: "REPLACE_ME"     # base64-encoded oauth client ID
  OAUTH2_PROXY_CLIENT_SECRET: "REPLACE_ME" # base64-encoded oauth client secret
  OAUTH2_PROXY_COOKIE_SECRET: "REPLACE_ME" # base64-encoded cookie secret value
```
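The cookie secret should be a random 16, 24, or 32-byte value. One way to generate and encode it, assuming openssl is available:

```shell
# Generate a random 32-byte, URL-safe base64 cookie secret
COOKIE_SECRET=$(openssl rand -base64 32 | tr -- '+/' '-_')

# Base64-encode it once more for the Secret manifest's data field
echo -n "$COOKIE_SECRET" | base64
```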
Next, create a new Service; we'll be running oauth2-proxy on port 4180:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
spec:
  selector:
    selector: oauth2-proxy
  type: ClusterIP
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
```
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  labels:
    app: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      selector: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
        selector: oauth2-proxy
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      automountServiceAccountToken: false
      containers:
        - name: proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
          imagePullPolicy: Always
          args:
            - --skip-provider-button=true
            - --provider=google
            - --authenticated-emails-file=/etc/oauth2-proxy-configs/allowed.txt
            - --cookie-domain=.supercorp.com
            - --whitelist-domain=*.supercorp.com
            - --cookie-expire=4h0m0s
            - --upstream=file:///dev/null
            - --http-address=0.0.0.0:4180
            - --request-logging=false
          envFrom:
            - secretRef:
                name: oauth2-proxy
          volumeMounts:
            - name: oauth2
              mountPath: /etc/oauth2-proxy-configs
      volumes:
        - name: oauth2
          configMap:
            name: oauth2-proxy
```
A few notes on configuration flags:
| Flag | Description |
|------|-------------|
| `--authenticated-emails-file` | List of allowed emails, coming from a ConfigMap. |
| `--cookie-domain` | Allow any app on a `.supercorp.com` subdomain to share the session cookie. |
| `--whitelist-domain` | Allowed domains for redirection after authentication. |
| `--request-logging` | Disables request logging for non-authentication requests; remove if necessary. |
Ingress for the auth.supercorp.com service deployment:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    # Automatically manage DNS for services with `external-dns` controller, if available.
    external-dns.alpha.kubernetes.io/hostname: auth.supercorp.com
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: "auth.supercorp.com"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
  tls:
    - secretName: oauth-proxy-ingress-tls
      hosts:
        - auth.supercorp.com
```
Certificate for the ingress, if you're using the cert-manager controller:
```yaml
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oauth-proxy-cert
spec:
  secretName: oauth-proxy-ingress-tls
  dnsNames:
    - auth.supercorp.com
  issuerRef:
    name: letsencrypt # <-- replace with your correct ClusterIssuer name
    kind: ClusterIssuer
    group: cert-manager.io
```
With all the k8s resource manifests in place, let's test the setup:
```shell
# Create a new namespace for our proxy setup
kubectl create namespace oauth2-proxy

# Apply resources
kubectl apply -n oauth2-proxy -f .
# or, if you saved all manifests in a directory:
# kubectl apply -n oauth2-proxy -f dir/
```
You'll see output like:
```
certificate.cert-manager.io/oauth-proxy-cert created
configmap/oauth2-proxy created
deployment.apps/oauth2-proxy created
ingress.networking.k8s.io/oauth2-proxy created
secret/oauth2-proxy created
service/oauth2-proxy created
```
Give it a minute or two, then head over to https://auth.supercorp.com. If everything is configured correctly, you should be presented with a Google authentication consent screen. Use the email account you've allowed in the ConfigMap manifest. The end result after successful authentication should be a page displaying:
```
404 page not found
```
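You can also probe the deployment from the command line. /ping is oauth2-proxy's unauthenticated health endpoint, and /oauth2/auth should reject requests that carry no session cookie (these commands assume the placeholder domain used throughout this post):

```shell
# Health check; no authentication required
curl -s https://auth.supercorp.com/ping

# Auth check without a session cookie; expect a 401 status
curl -s -o /dev/null -w '%{http_code}\n' https://auth.supercorp.com/oauth2/auth
```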
Good, it means it's working. Now we need to hook up a few dummy applications and see if our Google auth is actually enforced. For demo purposes, I've prepared a GitHub repo with 2 example apps, one of them plain nginx. Clone the repo:
```shell
git clone https://github.com/sosedoff/k8s-oauth2-proxy-example
cd k8s-oauth2-proxy-example
```
Prepare the namespaces (we use a namespace per app):
```shell
kubectl create namespace app1
kubectl create namespace app2
```
Our apps will be available at https://app1.supercorp.com and https://app2.supercorp.com. Make sure to adjust the URLs per your setup.
In order to enable Google OAuth, we provide a few annotations in the application's Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
  annotations:
    # ...
    nginx.ingress.kubernetes.io/auth-url: "https://auth.supercorp.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.supercorp.com/oauth2/start?rd=$scheme://$host$escaped_request_uri"
    # ...
```
That tells the nginx ingress controller to validate incoming requests, and initiate the OAuth flow if the session is not set or has expired.
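With the annotations in place, an unauthenticated request should come back as a redirect into the auth flow. A quick way to see that without a browser (again using the placeholder domain):

```shell
# Expect a 302 status and a redirect URL pointing at auth.supercorp.com
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' https://app1.supercorp.com/
```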
Go ahead and create the app resources:
```shell
kubectl -n app1 apply -f app1/
kubectl -n app2 apply -f app2/
```
We should now have both apps up and running. Navigate to https://app1.supercorp.com; if the oauth URL is correct, it should take you straight to the Google login page. Again, use one of the allowed emails to log in. On success, you should be taken back to the original URL and presented with the default nginx welcome screen.
And that's pretty much it. There are a lot of moving parts, but once you have them ironed out, configuring the auth flow is pretty easy. As a note, you should use separate oauth proxy deployments for public and internal auth flows.
Visit my GitHub repo with the oauth2-proxy manifests and example apps.
- Prefer installing k8s resources with Helm? No problem, use the oauth2-proxy Helm chart instead: https://artifacthub.io/packages/helm/oauth2-proxy/oauth2-proxy
- If your upstream apps need to handle OAuth account checking, oauth2-proxy forwards a few HTTP headers, like X-Forwarded-User (user ID) and X-Forwarded-Email. Refer to the official docs for additional options.
- To disable Google account selection, use
- Use the internal cluster oauth2 endpoint to speed up authentication calls at the ingress level, by pointing the auth-url annotation at the in-cluster service, e.g. http://oauth2-proxy.oauth2-proxy.svc.cluster.local:4180/oauth2/auth.