How to get a kubeconfig for an existing cluster?
To generate one tailored to your cluster, I'd need:
- Cluster API server URL
- CA certificate (or confirm it's OK to skip TLS verification)
- Auth method: service account token, client cert/key, or OIDC
- Desired context name and default namespace
Meanwhile, here are ready-to-use options you can fill in.

Option 1: Minimal kubeconfig (token-based)
Use when you have a bearer token (e.g., from a ServiceAccount).
Save as kubeconfig.yaml:
```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://YOUR-APISERVER-URL
    # Use one of the two lines below (prefer CA data or file)
    certificate-authority-data: BASE64_CA_CERT
    # certificate-authority: /path/to/ca.crt
users:
- name: my-user
  user:
    token: YOUR_BEARER_TOKEN
    # Or use OIDC fields if applicable:
    # auth-provider:
    #   name: oidc
    #   config:
    #     id-token: YOUR_ID_TOKEN
    #     refresh-token: YOUR_REFRESH_TOKEN
    #     client-id: YOUR_CLIENT_ID
    #     client-secret: YOUR_CLIENT_SECRET
    #     idp-issuer-url: https://issuer.example.com
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: default
current-context: my-context
```
Base64-encode files if using the -data fields:
- CA: base64 -w0 ca.crt on Linux, or base64 -b 0 ca.crt on macOS (both avoid line wraps)
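If you'd rather script the substitution, here is a small sketch; it assumes the ca.crt file and the BASE64_CA_CERT placeholder from the template above:

```bash
# Encode the CA without line wraps (Linux: -w0; on macOS use: base64 -b 0 ca.crt)
CA_DATA=$(base64 -w0 ca.crt)
# Drop the encoded value into the kubeconfig template (keeps a .bak backup)
sed -i.bak "s|BASE64_CA_CERT|$CA_DATA|" kubeconfig.yaml
```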
Test it:
```bash
KUBECONFIG=./kubeconfig.yaml kubectl get ns
```
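Optionally, a couple of further sanity checks against the same file:

```bash
# Confirm the API server is reachable and the credentials are accepted
kubectl --kubeconfig ./kubeconfig.yaml cluster-info
# See whether the token is allowed to do anything useful in the default namespace
kubectl --kubeconfig ./kubeconfig.yaml auth can-i list pods -n default
```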
Option 2: Create kubeconfig via kubectl commands
This builds a new file from local certs/token you already have.
```bash
# Write to a brand-new kubeconfig
export KUBECONFIG=./kubeconfig.yaml

# Cluster (use either CA file or embed the CA)
kubectl config set-cluster my-cluster \
  --server=https://YOUR-APISERVER-URL \
  --certificate-authority=/path/to/ca.crt \
  --embed-certs=true

# User (token auth)
kubectl config set-credentials my-user \
  --token=YOUR_BEARER_TOKEN

# Alternatively: client cert/key auth
# kubectl config set-credentials my-user \
#   --client-certificate=/path/to/client.crt \
#   --client-key=/path/to/client.key \
#   --embed-certs=true

# Context
kubectl config set-context my-context \
  --cluster=my-cluster \
  --user=my-user \
  --namespace=default

kubectl config use-context my-context
kubectl get ns
```
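If you need to hand the resulting file to another machine, one option is to flatten it so any referenced cert files are embedded (this assumes the context you just built is the active one):

```bash
# Write a single self-contained kubeconfig with certificates inlined
kubectl config view --minify --flatten > portable-kubeconfig.yaml
```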
Getting a ServiceAccount token
If you don’t already have a token, you can create a ServiceAccount and get a token for it.
Kubernetes 1.24+:
```bash
kubectl create ns myns
kubectl -n myns create sa ci-bot

# Bind a role (cluster-admin here; scope this down if you can)
kubectl create clusterrolebinding ci-bot-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=myns:ci-bot

# Get a short-lived token to paste into your kubeconfig
kubectl -n myns create token ci-bot
```
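cluster-admin is convenient for testing but very broad; two hedged variations you may prefer (the rolebinding name and the 24h duration are just examples):

```bash
# Namespace-scoped permissions instead of cluster-admin
kubectl -n myns create rolebinding ci-bot-edit \
  --clusterrole=edit \
  --serviceaccount=myns:ci-bot

# Request a longer-lived token (the API server may cap the duration)
kubectl -n myns create token ci-bot --duration=24h
```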
Kubernetes ≤1.23 (legacy secret-based tokens):
```bash
kubectl -n myns get secret | grep ci-bot
kubectl -n myns get secret <SECRET_NAME> -o jsonpath='{.data.token}' | base64 -d
kubectl -n myns get secret <SECRET_NAME> -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```
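If you are on 1.24+ but genuinely need a non-expiring token (use sparingly), here is a sketch that creates the legacy-style Secret explicitly; the Secret name is a placeholder:

```bash
# Create a long-lived token Secret bound to the ci-bot ServiceAccount
kubectl -n myns apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: ci-bot-token
  annotations:
    kubernetes.io/service-account.name: ci-bot
type: kubernetes.io/service-account-token
EOF

# Read the token once the controller has populated the Secret
kubectl -n myns get secret ci-bot-token -o jsonpath='{.data.token}' | base64 -d
```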
Then use the token and CA in Option 1 or Option 2 above.

Cloud-provider shortcuts
- EKS (AWS CLI configured): aws eks update-kubeconfig --name <cluster> --region <region>
- GKE (gcloud configured): gcloud container clusters get-credentials <cluster> --zone <zone> --project <project>
- AKS (az configured): az aks get-credentials --resource-group <rg> --name <cluster>
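Any of these merge credentials into your current kubeconfig (~/.kube/config by default), so a quick check afterwards is usually enough:

```bash
# Confirm which context the provider CLI activated, then hit the cluster
kubectl config current-context
kubectl get nodes
```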
If you share your server URL, CA, and preferred auth method, I can generate a complete kubeconfig for you with the exact values.
How to download the admin config from the Kubernetes controller?
Here's the quickest, safe way to grab the admin kubeconfig from a control-plane node.

Most common (kubeadm clusters)

Path on the node: /etc/kubernetes/admin.conf

Option A: one-liner (works even without root scp)

```bash
ssh <user>@<controller-host> 'sudo cat /etc/kubernetes/admin.conf' > admin.conf
```

Option B: with scp (if root login works)

```bash
scp root@<controller-host>:/etc/kubernetes/admin.conf ./admin.conf
```

Option C: staged copy with sudo

```bash
ssh <user>@<controller-host> 'sudo cp /etc/kubernetes/admin.conf /tmp/admin.conf && sudo chown $(id -u):$(id -g) /tmp/admin.conf'
scp <user>@<controller-host>:/tmp/admin.conf ./admin.conf
ssh <user>@<controller-host> 'sudo rm /tmp/admin.conf'
```

After downloading

Test it:

```bash
export KUBECONFIG=$PWD/admin.conf
kubectl get nodes
```

If the server in the file points to 127.0.0.1 or an internal-only IP, update it to a reachable address (VIP/LB or a control-plane node IP):

```bash
kubectl config set-cluster kubernetes \
  --server=https://<reachable-address>:6443
```
Or edit the “server:” line in admin.conf directly.
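For example, with sed, assuming the current value is https://127.0.0.1:6443 (adjust the pattern to whatever your file actually shows; a .bak backup is kept):

```bash
sed -i.bak 's#server: https://127.0.0.1:6443#server: https://<reachable-address>:6443#' admin.conf
```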
Notes
- Security: admin.conf contains a client cert and key with cluster-admin privileges. Store it securely and limit file permissions.
- Expired admin cert? On the control-plane node:
```bash
sudo kubeadm certs renew admin.conf
# then re-download the file as above
```
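To check whether renewal is actually needed, kubeadm can report certificate expiry dates:

```bash
# Lists expiry for admin.conf and the other kubeadm-managed certs
sudo kubeadm certs check-expiration
```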
Other common setups
k3s: file is /etc/rancher/k3s/k3s.yaml
Download:
```bash
ssh <user>@<server> 'sudo cat /etc/rancher/k3s/k3s.yaml' > k3s.yaml
```
Replace 127.0.0.1 with the server’s reachable address:
```bash
sed -i.bak 's/127.0.0.1/<server-ip>/g' k3s.yaml
```
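A quick connectivity check with the edited file:

```bash
KUBECONFIG=$PWD/k3s.yaml kubectl get nodes
```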
RKE2: /etc/rancher/rke2/rke2.yaml (same approach as k3s)

MicroK8s: run on the node and capture the output
```bash
ssh <user>@<node> 'sudo microk8s config' > microk8s.conf
```
kind: from your machine that runs kind
```bash
kind get kubeconfig --name <cluster-name> > kind.conf
```
Managed control planes (EKS/GKE/AKS)
You can’t access an “admin.conf” on provider-managed control planes. Use the provider CLI:
- EKS: aws eks update-kubeconfig --name <cluster> --region <region>
- GKE: gcloud container clusters get-credentials <cluster> --zone <zone> --project <project>
- AKS: az aks get-credentials --resource-group <rg> --name <cluster>
If you tell me which distro you’re using (kubeadm, k3s, microk8s, etc.) and the controller’s hostname/IP, I can give you the exact command tailored to your setup.