Multi-tenant Google authentication on Kubernetes

Deploying web services on a Kubernetes (k8s) platform generally covers two use cases: things your customers will use over the public internet, and systems designed for internal use only. Depending on how k8s is configured, these cases typically translate into private and public load balancers. Private resources are usually reachable only from within the network, so it's common to require VPN access for anyone who needs them. The story is quite different for publicly available resources.

Let's pretend we're running an organization with a primary website, and we would like to expose a few different services (apps, dashboards, etc.) via dedicated subdomains. But we also want these apps to be available only to members of our organization, identified by their corporate email. That's a classic use case for Google OAuth.

We could quickly throw some code together and add a Google OAuth flow into our app; luckily there are plenty of third-party packages for that. But that might not always be an option: you might simply lack the engineering resources or an understanding of the codebase. There should be an easier way to handle this, right? Oauth2-proxy to the rescue!

OAuth Proxy

Oauth2-proxy is a reverse proxy and static file server that provides authentication using providers (Google, GitHub, and others) to validate accounts by email, domain, or group. We essentially offload the hard work of authenticating users to the proxy and leave the rest of the application untouched. The general deployment architecture might look like this:

[clients] --> [load balancer] --> [oauth2-proxy] --> [upstream application]
                                   [google auth]

Oauth2-proxy accepts all incoming requests, validates them against the Google domain, and passes them on to the upstream backend (our app). However, in this mode the proxy acts as a networking bottleneck. Luckily, it supports a different mode of operation: validation only. In that scenario it only validates requests without doing any of the proxying, and performs the authentication flow when necessary. We can also hook up multiple applications.

The network flow diagram would look something like this:

                                / --> [app1]
                               /        |
                              /         |
[clients] --> [load balancer] -----> [oauth2-proxy] --> [google auth]
                              \         |
                               \        |
                                \ --> [app2]

Where app1 and app2 run on their own subdomains, and oauth2-proxy is available on a dedicated auth subdomain. Any request that hits app1 or app2 will be checked with oauth2-proxy for validity (via a session cookie) and passed on to the application backend with little network overhead (that depends on the setup™).

Kubernetes Setup

We're going to implement the flow described above and deploy oauth2-proxy for handling all our Google authentication needs in a k8s cluster. The same might work for systems deployed across several clusters, but that's out of scope for this post.


Prerequisites:

  1. Google Developer Console access, to manage your domain.
  2. A development k8s cluster, with kubectl installed locally.
  3. Manager access to modify DNS entries for your domain.
  4. The k8s cluster MUST use the nginx-ingress ingress controller; the cloud provider does not matter.

NOTE: If you're not using the external-dns controller, make sure to create a new DNS record pointing at your ingress load balancer IP address(es). Verify that it resolves correctly.


Our target setup:

  • The authentication endpoint will be served on its own dedicated subdomain.
  • Applications running at https://app<1/2> - we're assuming you already have these applications deployed and publicly available.

First, we need to create and configure a new Google OAuth application. Follow Google's setup steps to get that sorted out. When you get to the Authorized redirect URIs section of the setup, add your auth endpoint's callback URL (oauth2-proxy handles callbacks at the /oauth2/callback path).

Next, we'll start creating the k8s resources. Dump the resource definitions into separate files or a single file; it shouldn't matter.


Create a ConfigMap to hold the list of email addresses allowed to use Google OAuth. Note, this list is for testing purposes only, as it limits which accounts can authenticate with the endpoint. You should configure your Google OAuth app to use the Internal user type instead, to only allow users with emails in your organization's domain.

apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth2-proxy
data:
  # One allowed email address per line; user@example.com is a placeholder.
  allowed.txt: |
    user@example.com
Create a new Secret to hold the credentials of the Google OAuth app:

apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy
type: Opaque
data:
  # TIP: Use `echo -n VALUE | base64` to encode a raw secret value.
  OAUTH2_PROXY_CLIENT_ID: "REPLACE_ME" # base64-encoded oauth client ID
  OAUTH2_PROXY_CLIENT_SECRET: "REPLACE_ME" # base64-encoded oauth client secret
  OAUTH2_PROXY_COOKIE_SECRET: "REPLACE_ME" # base64-encoded cookie secret value
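The cookie secret should decode to a random 16, 24, or 32-byte value. Here's one way to generate and encode it (a sketch; any secure random source works):

```shell
# Generate a random 32-byte, base64-encoded cookie secret for oauth2-proxy.
COOKIE_SECRET=$(openssl rand -base64 32 | tr -d '\n')
echo "$COOKIE_SECRET"

# The Secret manifest expects the value base64-encoded once more:
echo -n "$COOKIE_SECRET" | base64 | tr -d '\n'
```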

Next, create a new Service; we'll be running oauth2-proxy on port 4180:

apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
spec:
  selector:
    app: oauth2-proxy
  type: ClusterIP
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180

Now, the oauth2-proxy Deployment itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  labels:
    app: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      automountServiceAccountToken: false
      containers:
        - name: proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:latest # pin a specific version in production
          imagePullPolicy: Always
          args:
            - --skip-provider-button=true
            - --provider=google
            - --authenticated-emails-file=/etc/oauth2-proxy-configs/allowed.txt
            - --whitelist-domain=*
            - --cookie-expire=4h0m0s
            - --upstream=file:///dev/null
            - --http-address=0.0.0.0:4180 # matches the Service targetPort
            - --request-logging=false
          envFrom:
            - secretRef:
                name: oauth2-proxy
          volumeMounts:
            - name: oauth2
              mountPath: /etc/oauth2-proxy-configs
      volumes:
        - name: oauth2
          configMap:
            name: oauth2-proxy

A few notes on configuration flags:

Option                        Description
--authenticated-emails-file   List of allowed emails, coming from a ConfigMap.
--whitelist-domain=*          Allowed domains for redirection after authentication;
                              lets any app on the domain use the shared auth service.
--request-logging=false       Disables request logging for non-authentication requests; remove if necessary.

Finally, the Ingress for the proxy deployment:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  annotations:
    # Automatically manage DNS for the service with the `external-dns` controller, if available.
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: auth.example.com # replace with your auth domain
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
  tls:
    - hosts:
        - auth.example.com # replace with your auth domain
      secretName: oauth-proxy-ingress-tls

And a Certificate for the ingress, if you're using the cert-manager controller:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oauth-proxy-cert
spec:
  secretName: oauth-proxy-ingress-tls
  dnsNames:
    - auth.example.com # replace with your auth domain
  issuerRef:
    name: letsencrypt # <-- replace with your correct ClusterIssuer name
    kind: ClusterIssuer
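Alternatively, if your cert-manager installation has ingress-shim enabled, you can skip the standalone Certificate and let cert-manager create it from an annotation on the Ingress itself (assuming the same `letsencrypt` ClusterIssuer name):

```yaml
metadata:
  annotations:
    # cert-manager's ingress-shim creates the Certificate for the TLS hosts automatically.
    cert-manager.io/cluster-issuer: letsencrypt
```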


With all the k8s resource manifests in place, let's test the setup:

# Create a new namespace for our proxy setup
kubectl create namespace oauth2-proxy

# Apply resources
kubectl apply -n oauth2-proxy -f .

# or, if you saved all manifests in a directory:
# kubectl apply -n oauth2-proxy -f dir/

You'll see output like:

configmap/oauth2-proxy created
deployment.apps/oauth2-proxy created
ingress.networking.k8s.io/oauth2-proxy created
secret/oauth2-proxy created
service/oauth2-proxy created

Give it a minute or two, then head over to your auth endpoint in a browser. If everything is configured correctly, you should be presented with a Google authentication consent screen. Use an email account you've allowed in the ConfigMap manifest. The end result after successful authentication should be a page displaying:

404 page not found

Good, that means it's working (the proxy has no real upstream, hence the 404). Now we need to hook up a few dummy applications and see if our Google auth is actually enforced. For demo purposes, I've prepared a GitHub repo with two example apps: one is nginx and the other is pgweb. Let's clone the repo:

git clone
cd k8s-oauth2-proxy-example

Prepare the namespaces (we use a namespace per app):

kubectl create namespace app1
kubectl create namespace app2

Our apps will be available at the app1 and app2 subdomains respectively. Make sure to adjust the URLs to your setup.

In order to enable Google OAuth, we provide a few annotations in the Ingress manifest:

kind: Ingress
metadata:
  name: app1
  annotations:
    # ...
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth" # replace with your auth domain
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$escaped_request_uri"
    # ...
That tells the nginx ingress controller to validate incoming requests against oauth2-proxy, and to initiate the oauth flow if the session cookie is not set or has expired.
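Under the hood, these annotations make the ingress controller use nginx's auth_request module. A simplified, illustrative sketch of what the generated config does (not the controller's exact output; auth.example.com stands in for your auth domain, and the upstream name is hypothetical):

```nginx
location / {
    # Subrequest to oauth2-proxy; 2xx lets the request through, 401 triggers sign-in.
    auth_request /oauth2/auth;
    error_page 401 = https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri;
    proxy_pass http://app1-backend;  # hypothetical upstream name
}
```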

Go ahead and create the app resources:

kubectl -n app1 apply -f app1/
kubectl -n app2 apply -f app2/

We should now have both apps up and running. Navigate to the first app - if the oauth URL is correct, it should take you straight to the Google login page. Again, use one of the allowed emails to log in. On success, you should be taken back to the original URL and presented with the default nginx welcome screen.

And that's pretty much it. There are a lot of moving parts, but it's pretty easy to configure the auth flow once you have them ironed out. As a note, you should use separate oauth proxy deployments for public and internal auth flows.

Visit my GitHub repo with the oauth2-proxy manifests and example apps.


  1. Prefer installing k8s resources with Helm? No problem, use the oauth2-proxy Helm chart instead.
  2. If your upstream apps need to handle OAuth account checking, oauth2-proxy forwards a few HTTP headers, like X-Forwarded-User (user ID) and X-Forwarded-Email. Refer to the official docs for additional options.
  3. To disable Google account selection, use the --login-url option.
  4. To speed up authentication calls at the ingress level, point the auth-url annotation at the internal cluster oauth2 endpoint: http://oauth2-proxy.oauth-proxy.svc.cluster.local:4180/oauth2/auth
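Related to note 2: by default the nginx ingress does not pass the proxy's identity headers on to upstreams. A sketch of the extra wiring (this is a real nginx-ingress annotation, but the header names shown require running oauth2-proxy with the --set-xauthrequest flag):

```yaml
metadata:
  annotations:
    # Copy identity headers from the oauth2-proxy auth subrequest response
    # onto the request forwarded to the upstream application.
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User, X-Auth-Request-Email"
```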