Set up a self-hosted OIDC provider

Let us go through how you can set up an OIDC provider in a Kubernetes cluster. We will use:

- GLAuth as an LDAP server for user and group management
- Dex as the OIDC provider, federating with GLAuth over LDAP

After going through the general setup we will set up a local proof of concept.

GLAuth

GLAuth is an LDAP server that handles user and group management. We will configure it to function as an authentication backend for Dex.

Kubernetes manifests

A Helm chart does exist for GLAuth, but it does not suit our needs. We are configuring the application to use statically defined users and will therefore need a deployment, a service, and a secret for its configuration.

Note - the communication between Dex and GLAuth is plain LDAP here, but should be LDAP over SSL (LDAPS) in a production cluster.

Service

The service exposes port 3893 where we will communicate over LDAP.

The service manifest:
apiVersion: v1
kind: Service
metadata:
  name: ldap
  namespace: auth
spec:
  type: ClusterIP
  selector:
    app: ldap
  internalTrafficPolicy: Cluster
  ports:
    - port: 3893
      targetPort: ldap
      protocol: 'TCP'

Deployment

The deployment is a locked-down single replica of GLAuth version 2.4.0 with rolling updates, mounting the following secret as its configuration.

The deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
  namespace: auth
spec:
  replicas: 1
  revisionHistoryLimit: 0
  minReadySeconds: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: ldap
  template:
    metadata:
      labels:
        app: ldap
    spec:
      terminationGracePeriodSeconds: 30
      automountServiceAccountToken: false
      containers:
        - name: ldap
          image: glauth/glauth:v2.4.0
          resources:
            limits:
              cpu: 10m
              memory: 12Mi
            requests:
              cpu: 5m
              memory: 12Mi
          ports:
            - name: ldap
              containerPort: 3893
          startupProbe:
            tcpSocket:
              port: ldap
            failureThreshold: 30
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: ldap
            failureThreshold: 1
            periodSeconds: 10
          securityContext:
            capabilities:
              drop:
                - ALL
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 9001
          volumeMounts:
            - name: ldap
              mountPath: /app/config/config.cfg
              subPath: ldap.cfg
      volumes:
        - name: ldap
          secret:
            secretName: ldap-config

Secret

The secret contains the full configuration of GLAuth, which we will dive a little deeper into in the next section.

The secret manifest:
apiVersion: v1
kind: Secret
metadata:
  name: ldap-config
  namespace: auth
stringData:
  ldap.cfg: |
    debug = false

    [ldap]
      enabled = true
      listen = "0.0.0.0:3893"
      tls = false

    [ldaps]
      enabled = false

    [backend]
      datastore = "config"
      baseDN = "dc=glauth,dc=com"

    [[users]]
      name = "bind"
      uidnumber = 9001
      primarygroup = 5000
      passsha256 = "is-replaced-with-ldap-hash"
      mail = "dex@service"
      [[users.capabilities]]
        action = "search"
        object = "*"

    [[users]]
      name = "alice"
      uidnumber = 1001
      primarygroup = 5003
      passsha256 = "ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f" # password123
      mail = "alice@example.com"
      givenname = "Alice"
      sn = "Liddell"

    [[users]]
      name = "bob"
      uidnumber = 1002
      primarygroup = 5002
      passsha256 = "ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f" # password123
      mail = "bob@example.com"
      givenname = "Bob"
      sn = "Builder"

    [[groups]]
      name = "services"
      gidnumber = 5000

    [[groups]]
      name = "editor"
      gidnumber = 5001
      includegroups = [ 5003 ]

    [[groups]]
      name = "viewer"
      gidnumber = 5002

    [[groups]]
      name = "admin"
      gidnumber = 5003

Configuration

The configuration consists of two parts: one defining the statically defined users and groups, and one defining how to connect to the server.

Users and Groups

You can configure users and groups, and a user can effectively be a member of multiple groups, but the mechanism is a little quirky: you can only assign a single primary group to a user, however you can configure a group to include the members of another group.

The following example for Alice makes her a member of both the admin group and the editor group.

    [[users]]
      name = "alice"
      uidnumber = 1001
      primarygroup = 5003
      passsha256 = "ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f" # password123
      mail = "alice@example.com"
      givenname = "Alice"
      sn = "Liddell"

    [[groups]]
      name = "admin"
      gidnumber = 5003

    [[groups]]
      name = "editor"
      gidnumber = 5001
      includegroups = [ 5003 ]
Line 4: this assigns Alice as a member of the admin group.
Line 12: the admin group is number 5003.
Line 17: the editor group is assigned to everyone in group number 5003.
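
GLAuth's group-inclusion semantics can be sketched in a few lines of Python. This is a simplified model for illustration, not GLAuth's actual code; the group names and numbers match the configuration above.

```python
# Simplified model of GLAuth group membership (not GLAuth's actual code).
groups = {5000: "services", 5001: "editor", 5002: "viewer", 5003: "admin"}
# editor (5001) includes the members of admin (5003)
includegroups = {5001: [5003]}

def effective_groups(primarygroup):
    """A user's groups: the primary group plus any group that includes it."""
    names = {groups[primarygroup]}
    for gid, included in includegroups.items():
        if primarygroup in included:
            names.add(groups[gid])
    return names

# Alice has primarygroup 5003 (admin), and editor includes group 5003
print(sorted(effective_groups(5003)))  # ['admin', 'editor']
```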

Connections and queries

The following configuration sets up the credentials Dex will use and defines the organization of the users.

    [backend]
      datastore = "config"
      baseDN = "dc=glauth,dc=com"

    [[users]]
      name = "bind"
      uidnumber = 9001
      primarygroup = 5000
      passsha256 = "is-replaced-with-ldap-hash"
      mail = "dex@service"
      [[users.capabilities]]
        action = "search"
        object = "*"
Line 3: sets the organization elements.
Lines 5-10: define the service account for Dex.
Lines 11-13: grant the service account search privileges.
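
The passsha256 values are plain hex-encoded SHA-256 digests of the passwords; the is-replaced-with-ldap-hash placeholder for the bind user is substituted the same way in the proof of concept's Makefile later on. Computing such a hash only needs the standard library:

```python
import hashlib

def glauth_passsha256(password):
    """GLAuth's passsha256 format: the hex-encoded SHA-256 of the password."""
    return hashlib.sha256(password.encode()).hexdigest()

# Matches the hash used for alice and bob in the secret above
print(glauth_passsha256("password123"))
# ef92b778bafe771e89245b89ecbc08a44a4e166c06659911881f383d4473e94f
```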

Dex

Dex can federate with many backends - GitHub, Google, LDAP, SAML, etc.

We are using the LDAP connector to talk to GLAuth.

Configuration

The following Dex configuration defines a bind user for connecting to the LDAP server, along with how to perform user and group queries.

  connectors:
    - type: ldap
      id: glauth
      name: GLAuth
      config:
        host: ldap.auth.svc:3893
        insecureNoSSL: true
        bindDN: cn=bind,dc=glauth,dc=com
        bindPW: '{{ .Env.LDAP_PASSWORD }}'

        userSearch:
          baseDN: 'dc=glauth,dc=com'
          filter: '(objectClass=*)'
          username: 'uid'
          idAttr: 'uid'
          emailAttr: 'mail'
          nameAttr: 'uid'

        groupSearch:
          baseDN: 'ou=users,dc=glauth,dc=com'
          filter: '(objectClass=*)'
          nameAttr: 'ou'
          userMatchers:
            - userAttr: uid
              groupAttr: memberUid
Lines 8-9: set the credentials for the service account.
Lines 11-17: define user queries that use uid as the username.
Lines 19-25: define group queries that use memberUid to resolve groups.
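
Conceptually, the userMatchers tell Dex that a group contains a user when the group's memberUid attribute (groupAttr) lists the user's uid (userAttr). A rough sketch of that matching, using hypothetical records rather than Dex's actual code:

```python
# Hypothetical LDAP entries, shaped like what Dex's groupSearch might return.
user = {"uid": "alice"}
group_entries = [
    {"ou": "editor", "memberUid": ["alice"]},
    {"ou": "admin", "memberUid": ["alice"]},
    {"ou": "viewer", "memberUid": ["bob"]},
]

# userMatchers: a group matches when the group's memberUid (groupAttr)
# contains the user's uid (userAttr); nameAttr=ou names the group.
user_groups = [g["ou"] for g in group_entries if user["uid"] in g["memberUid"]]
print(user_groups)  # ['editor', 'admin']
```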

Static Clients

Static clients are applications that will use Dex for authentication. Each client must be registered with Dex to allow the OIDC authorization flow.

  staticClients:
    - id: argocd
      name: ArgoCD
      secretEnv: ARGO_CD_CLIENT_SECRET
      redirectURIs:
        - http://argo.127.0.0.1.nip.io/auth/callback

Each client has:

- an id and a human-readable name
- a client secret, here read from the environment variable named by secretEnv
- one or more redirectURIs that Dex will accept in the authorization flow
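
Worth noting: OAuth2/OIDC redirect URIs are matched exactly against the registered list, so the callback URL must match character for character. A minimal sketch of that check:

```python
# Redirect URIs registered for the argocd client above.
registered = {"http://argo.127.0.0.1.nip.io/auth/callback"}

def redirect_allowed(uri):
    """OIDC providers match redirect URIs exactly against the registered set."""
    return uri in registered

print(redirect_allowed("http://argo.127.0.0.1.nip.io/auth/callback"))  # True
print(redirect_allowed("http://argo.127.0.0.1.nip.io/elsewhere"))      # False
```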

Groups

The group memberships defined for the users in GLAuth can be queried by Dex using the groupSearch. Any group memberships will result in issued tokens carrying a groups claim with the name of each group.

Note - the groups claim is only included if the OIDC login flow is started with groups among the requested scopes.
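
To inspect the groups claim yourself, you can decode the middle segment of the issued JWT; it is base64url-encoded without padding, so the padding has to be restored first (the same trick the proof of concept's test targets use with base64 -d). A small sketch with a hypothetical token:

```python
import base64
import json

# Build a hypothetical token payload mirroring the claims shown later on.
payload = {"email": "alice@example.com", "groups": ["editor", "admin"]}
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
token = b".".join([b"header", segment, b"signature"]).decode()

# Decoding: take the middle segment and restore the stripped '=' padding.
middle = token.split(".")[1]
middle += "=" * (-len(middle) % 4)
claims = json.loads(base64.urlsafe_b64decode(middle))
print(claims["groups"])  # ['editor', 'admin']
```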

Securing an Application with OIDC Groups and RBAC

The GLAuth/Dex setup we have discussed so far can be used to secure any OIDC-capable application.

Let us use it to secure Argo CD, including RBAC, as an example.

The RBAC in Argo CD can use the groups claim from the OIDC token to make authorization decisions.

Argo CD has built-in support for OIDC and can be configured like the following:

  cm:
    create: true
    oidc.config: |
      name: Dex
      issuer: http://dex.127.0.0.1.nip.io
      clientID: argocd
      clientSecret: $argocd-secret:CLIENT_SECRET
      requestedScopes: ["openid", "profile", "email", "groups"]

The key parts:

- issuer: the URL where Argo CD reaches Dex
- clientID and clientSecret: must match the static client registered with Dex
- requestedScopes: must include groups for the tokens to carry the groups claim

Note - the current version of Argo CD has a bug related to referenced secrets, which made it necessary to place the client secret directly in argocd-secret.

Argo CD uses OIDC groups for authorization. The RBAC configuration maps groups to roles:

  rbac:
    create: true
    policy.default: role:none
    policy.csv: |
      g, editor, role:admin
      g, viewer, role:readonly

Roles are assigned based on the groups claim:

Group Claim   Assigned Role   Logged in
editor        admin           Yes
viewer        readonly        Yes
(no match)    (none)          No
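
The effect of the policy can be sketched as a simple group-to-role lookup. This is a simplification of Argo CD's casbin-based RBAC engine, kept only to illustrate how the g-lines map the groups claim to roles:

```python
# The g-lines from policy.csv above map a group to a role.
policy_csv = """g, editor, role:admin
g, viewer, role:readonly"""

mapping = {}
for line in policy_csv.splitlines():
    _, group, role = (part.strip() for part in line.split(","))
    mapping[group] = role

def roles_for(groups):
    """Roles granted by the groups claim; empty means policy.default applies."""
    return {mapping[g] for g in groups if g in mapping}

print(sorted(roles_for(["editor", "admin"])))  # ['role:admin']
print(roles_for(["stranger"]))                 # set()
```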

Proof of Concept

This section goes through running the above OIDC setup in a local Kubernetes cluster using Kind.

Overview

The main components needed for this proof of concept are:

- a local Kind cluster
- an ingress controller (NGINX Gateway Fabric)
- GLAuth and Dex, configured as described above
- Argo CD and Grafana as the OIDC-secured sample applications

This proof of concept is based on an earlier post about debugging OIDC logins.

The Kubernetes Cluster

The first two components on the list are Kind and the ingress controller. Neither needs anything special in this setup.

Kind is a simple way to run a local cluster for testing purposes, which means you can likely try this yourself on your own machine. Kind brings along the core Kubernetes components such as the API server, scheduler, and DNS server.

An ingress controller is the software a Kubernetes cluster requires to route external traffic into the cluster.

These two components enable the cluster to host HTTP applications - and technically more, but that is irrelevant for our setup.

Network

We are taking certain shortcuts regarding the network setup, most notably not securing it with HTTPS/SSL, since this is a disposable local setup.

Our setup will work regardless of whether you have SSL termination at the ingress controller or at each application - even though this local setup will use HTTP.

Notably, we are also skipping SSL for connections between applications inside the cluster.

Application Network

The cluster will expose three web applications through the ingress controller, plus one application (GLAuth) that is only accessible from inside the cluster and only used directly by Dex.

[Diagram: the Gateway routes traffic to Dex, Argo CD, and Grafana inside the Kind cluster; GLAuth is reached only by Dex.]

We could assign a port number to each web application and serve them as http://127.0.0.1:8080, etc., but nip.io is a better option and allows us to use these addresses instead:

- http://argo.127.0.0.1.nip.io
- http://dex.127.0.0.1.nip.io
- http://grafana.127.0.0.1.nip.io

Note that there would be issues with OIDC redirects and/or cookies if we tried the one-application-per-port approach.

A nip.io address always resolves to the IP address in its name:

Prefix            Dot   Address     Dot   Suffix
anything.i.want   .     127.0.0.1   .     nip.io

This means everything is served on localhost, which works fine for your browser - but inside the cluster, localhost becomes an issue we need to tackle.
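
The nip.io scheme can be mimicked locally: the service simply answers any query with whatever dotted-quad precedes the nip.io suffix. A sketch of that parsing:

```python
import re

def nipio_ip(hostname):
    """Extract the embedded IPv4 address a nip.io name resolves to."""
    match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})\.nip\.io$", hostname)
    return match.group(1) if match else None

print(nipio_ip("dex.127.0.0.1.nip.io"))              # 127.0.0.1
print(nipio_ip("anything.i.want.127.0.0.1.nip.io"))  # 127.0.0.1
```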

Cluster DNS

The web applications run inside the cluster in individual pods - groups of containers - which means each pod has its own IP, and 127.0.0.1 and localhost therefore refer to the pod's own loopback interface.

This means that if the Dex pod made a request to http://argo.127.0.0.1.nip.io, Dex would connect to itself.

For a browser making HTTP requests to the cluster this is not an issue, but parts of the OIDC login flow require the OIDC-capable application to make requests directly to the OIDC provider.

To solve this issue we are going to make CoreDNS - the DNS server that came with Kind - rewrite the DNS lookup for Dex to the service that points to Dex.

We can do this by updating the ConfigMap named coredns in the kube-system namespace, adding the rewrite line shown here in context:

        ready
        rewrite name dex.127.0.0.1.nip.io dex.dex.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
The full ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        rewrite name dex.127.0.0.1.nip.io dex.dex.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
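
The effect of the rewrite rule can be illustrated with a tiny lookup table - an illustration of the behavior, not of CoreDNS internals: only Dex's public name is redirected to the in-cluster service, while every other name resolves as usual.

```python
# The single rewrite rule from the Corefile above.
rewrites = {"dex.127.0.0.1.nip.io": "dex.dex.svc.cluster.local"}

def resolve_target(name):
    """The name a DNS query is rewritten to before normal resolution."""
    return rewrites.get(name, name)

print(resolve_target("dex.127.0.0.1.nip.io"))   # dex.dex.svc.cluster.local
print(resolve_target("argo.127.0.0.1.nip.io"))  # argo.127.0.0.1.nip.io
```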

Running the proof of concept

The Makefile will:

- verify that the required tools are installed and clean up any previous run
- pull the Helm charts, generate random secrets, and render all manifests
- create the Kind cluster and apply everything
- run scripted login tests against Argo CD and Grafana

The full Makefile:
cluster := setting-up-oidc

tmp := /tmp/$(cluster)
kubectl := kubectl --context kind-$(cluster)

jar := $(tmp)/cookie.jar
curl := curl --cookie $(jar) --cookie-jar $(jar)

all: clean setup test

verify-deps:
	which docker kind kubectl helm curl yq base64 sha256 > /dev/null

clean: verify-deps
	-kind -q delete cluster --name $(cluster)
	-find $(tmp) -mindepth 1 -delete
	-find build -not -name .gitignore -delete

setup: verify-deps configure create apply settle

configure: build/secrets.yaml build/argocd-secrets.yaml build/dex build/argo-cd build/grafana build/nginx-gateway-fabric
	-rm -rf $(tmp)
	-mkdir -p $(tmp)
	helm template dex build/dex -n dex -f values/dex.yaml --create-namespace | yq '.metadata.namespace = "dex"' > $(tmp)/c.0
	helm template argocd build/argo-cd -n argocd -f values/argocd.yaml --create-namespace > $(tmp)/c.1
	helm template grafana build/grafana -n grafana -f values/grafana.yaml --create-namespace > $(tmp)/c.2
	helm template ngf build/nginx-gateway-fabric -n nginx-gateway -f values/nginx.yaml --create-namespace > $(tmp)/c.3
	curl -fsL https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.5.1/standard-install.yaml > $(tmp)/c.4
	yq 'select(.kind == "CustomResourceDefinition")' $(tmp)/c.* build/nginx-gateway-fabric/crds/*.yaml > build/crd.yaml
	yq 'select(.kind and .kind != "CustomResourceDefinition")' $(tmp)/c.* manifests/*.yaml | \
	sed s/is-replaced-with-ldap-hash/$$(yq '. | select(.metadata.name == "ldap") | .data.LDAP_PASSWORD' build/secrets.yaml | base64 -d | sha256)/ \
	> build/applications.yaml

build/dex:
	helm pull dex --version 0.24.0 --repo https://charts.dexidp.io --destination build --untar

build/argo-cd:
	helm pull argo-cd --version 9.4.10 --repo https://argoproj.github.io/argo-helm --destination build --untar

build/grafana:
	helm pull grafana --version 11.3.2 --repo https://grafana-community.github.io/helm-charts --destination build --untar

build/nginx-gateway-fabric:
	helm pull --version 2.4.2 oci://ghcr.io/nginx/charts/nginx-gateway-fabric --destination build --untar

build/secrets.yaml:
	-rm -rf $(tmp)
	-mkdir -p $(tmp)
	export ARGOCD=$$(openssl rand -hex 20) GRAFANA=$$(openssl rand -hex 20) LDAP=$$(openssl rand -hex 20) && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "ARGO_CD_CLIENT_SECRET=$$ARGOCD" --from-literal "GRAFANA_CLIENT_SECRET=$$GRAFANA" --namespace dex client-secrets > $(tmp)/a.0 && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "CLIENT_SECRET=$$GRAFANA" --namespace grafana client-secret > $(tmp)/a.1 && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "CLIENT_SECRET=$$ARGOCD" --from-literal "server.secretkey=$$(openssl rand -base64 32)" --namespace argocd argocd-secret | yq '.metadata.labels."app.kubernetes.io/part-of" = "argocd"' > $(tmp)/a.4 && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "LDAP_PASSWORD=$$LDAP" --namespace dex ldap > $(tmp)/a.5 && \
	yq $(tmp)/a.* > build/secrets.yaml

build/argocd-secrets.yaml:
	-rm -rf $(tmp)
	-mkdir -p $(tmp)
	export API=$$(openssl rand -hex 20) REDIS=$$(openssl rand -hex 20) && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "admin-user=api" --from-literal "admin-password=$$API" --namespace grafana credentials > $(tmp)/a.2 && \
		kubectl create secret generic --dry-run=client --output yaml --type=Opaque --from-literal "auth=$$REDIS" --namespace argocd argocd-redis > $(tmp)/a.3 && \
	yq $(tmp)/a.* > build/argocd-secrets.yaml

create:
	kind -q create cluster --config kind.config --name $(cluster)
	$(kubectl) create namespace nginx-gateway
	$(kubectl) create namespace dex
	$(kubectl) create namespace argocd
	$(kubectl) create namespace grafana
	$(kubectl) create namespace auth
	$(kubectl) apply --server-side -f build/crd.yaml
	$(kubectl) rollout status -n nginx-gateway deployment

apply:
	$(kubectl) apply --server-side --force-conflicts --wait -f build/secrets.yaml
	$(kubectl) apply --server-side --force-conflicts --wait -f build/argocd-secrets.yaml
	$(kubectl) apply --server-side --force-conflicts -f build/applications.yaml
	$(kubectl) rollout restart -n kube-system deployment/coredns

	$(kubectl) rollout status -n grafana deployment
	$(kubectl) rollout status -n dex deployment
	$(kubectl) rollout status -n argocd statefulset
	$(kubectl) rollout status -n argocd deployment

test: test-argocd test-grafana

test-argocd:
	-@mkdir -p $(tmp)
	-@rm -f $(jar)
	@touch $(jar)

	@$(curl) -fsLo $(tmp)/login.html http://argo.127.0.0.1.nip.io/auth/login
	@grep -o 'action="[^"]*"' < $(tmp)/login.html | cut -d\" -f2 | sed 's/&amp;/\&/g' > $(tmp)/path

	@$(curl) -fsD $(tmp)/header.log -XPOST -d "login=alice&password=password123" "http://dex.127.0.0.1.nip.io$$(cat $(tmp)/path)"
	@grep ^Location $(tmp)/header.log | cut -d' ' -f2 | tr -d '\r' > $(tmp)/endpoint

	@$(curl) -fso /dev/null "$$(cat $(tmp)/endpoint)"

	@grep argocd.token $(jar) | cut -f7- > $(tmp)/token

	@echo Token:
	@(cut -d. -f2 < $(tmp)/token|tr -d '\n'; echo '===') | base64 -d | yq --input-format json
	@echo

test-grafana:
	-@mkdir -p $(tmp)
	-@rm -f $(jar)
	@touch $(jar)

	@$(curl) -fsLo $(tmp)/login.html http://grafana.127.0.0.1.nip.io/login/generic_oauth
	@grep -o 'action="[^"]*"' < $(tmp)/login.html | cut -d\" -f2 | sed 's/&amp;/\&/g' > $(tmp)/path

	@$(curl) -fsD $(tmp)/header.log -XPOST -d "login=alice&password=password123" "http://dex.127.0.0.1.nip.io$$(cat $(tmp)/path)"
	@grep ^Location $(tmp)/header.log | cut -d' ' -f2 | tr -d '\r' > $(tmp)/endpoint

	@$(curl) -fso $(tmp)/more "$$(cat $(tmp)/endpoint)"

	@echo Session cookie:
	@grep grafana_session $(jar) | grep -v expiry | cut -f7-
	@echo

	@echo User profile:
	@$(curl) -fsLo - http://grafana.127.0.0.1.nip.io/api/user | yq --input-format json
	@echo

settle:
	@sleep 20

Output

The final output from a run (which takes a few minutes) looks like:

Token:
iss: http://dex.127.0.0.1.nip.io
sub: CgVhbGljZRIGZ2xhdXRo
aud: argocd
exp: 1.773689837e+09
iat: 1.773603437e+09
at_hash: TQaEW8Y06l6Ivrd60XUhyw
c_hash: WzVpvw-SA7UV_AoKxLjoSw
email: alice@example.com
email_verified: true
groups:
  - editor
  - admin
name: alice

Session cookie:
88ee2a5587f828d9ae0470415e41729b

User profile:
id: 2
uid: ffg4cr1u6el1cd
email: alice@example.com
name: alice
login: alice@example.com
theme: ""
orgId: 1
isGrafanaAdmin: true
isDisabled: false
isExternal: true
isExternallySynced: true
isGrafanaAdminExternallySynced: true
authLabels:
  - Generic OAuth
updatedAt: "2026-03-15T19:37:18Z"
createdAt: "2026-03-15T19:37:18Z"
avatarUrl: /avatar/c160f8cc69a4f0bf2b0362752353d060
isProvisioned: false

The output demonstrates that we can successfully log in using an OIDC login flow for both sample applications.
