Speeddymon

> I noticed that it is possible to combine 2 kubeconfig files into one config file and use contexts, but I don't have contexts in my kubeconfig files.

You can add 2 contexts with `kubectl config set-context`. Then you could copy in your second cluster's config and switch between them with `kubectl config use-context`.
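For example, a minimal sketch with hypothetical context names `cluster1` and `cluster2` (the referenced cluster and user entries must already exist in the kubeconfig):

```
# Define one context per cluster
kubectl config set-context cluster1 --cluster=cluster1 --user=cluster1-admin
kubectl config set-context cluster2 --cluster=cluster2 --user=cluster2-admin

# List the contexts and switch between them
kubectl config get-contexts
kubectl config use-context cluster2
```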


tech1ndex

To further build on this, you could also use something like [kubectx](https://github.com/ahmetb/kubectx) for a faster switching experience.
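Day to day it looks roughly like this (context names are whatever is in your merged kubeconfig):

```
kubectx                 # list all contexts found in the loaded kubeconfigs
kubectx k8s-cluster2    # switch to a context
kubectx -               # jump back to the previous context
```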


Life-City1758

Kubectx is the best. I will also plug Lens, as it can source multiple config files and store its own config.


LeadershipFamous1608

Thank you for the comment u/Speeddymon. I tried to merge the 2 config files but ended up with only 1 file being added to the .kube/config file:

```
export KUBECONFIG=~/cluster1-config:~/cluster2-config
kubectl config view --flatten > ~/.kube/config
```

When I run `kubectl config get-contexts` it always shows a single entry (kubernetes-admin@kubernetes). I think the issue is that in both my cluster1 and cluster2 config files the **cluster name** and **context name** **are the same**. So I am now going to see if I can rename the cluster names and context names in the individual files and then merge them (one possible approach is sketched below).

1. Do I have to rename the cluster name and context name on the master node and then copy the files to my machine, or can I do it directly on the already-copied config files on my computer without going to the master node of each cluster?
2. Should the cluster name and context names match the config files that are inside the master node of each cluster?
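A minimal sketch of renaming directly on the local copies (these names are purely client-side, so the master nodes don't need to change), assuming the copies are named `cluster1-config`/`cluster2-config` and still carry the default kubeadm context name `kubernetes-admin@kubernetes`:

```
# Rename the context in each copied file
kubectl config rename-context --kubeconfig ~/cluster1-config kubernetes-admin@kubernetes cluster1-admin
kubectl config rename-context --kubeconfig ~/cluster2-config kubernetes-admin@kubernetes cluster2-admin

# There is no rename-cluster subcommand, so patch the cluster name and its reference with yq
yq -i '.clusters[0].name = "cluster1" | .contexts[0].context.cluster = "cluster1"' ~/cluster1-config
yq -i '.clusters[0].name = "cluster2" | .contexts[0].context.cluster = "cluster2"' ~/cluster2-config

# Then merge as before
export KUBECONFIG=~/cluster1-config:~/cluster2-config
kubectl config view --flatten > ~/.kube/config
```

Note that the user names will still collide if both clusters use the same user entry (see the resolution further down).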


myspotontheweb

I put the kubeconfig file for each cluster in the ```$HOME/.kube/clusters``` directory and set the following variable in the ```$HOME/.bashrc``` file:

```
export KUBECONFIG=~/.kube/config:$(find ~/.kube/clusters -type f | tr '\n' ':')
```

To ensure each file has a unique context, I have a script that updates each cluster configuration:

```
#!/bin/bash
#
# Ensure that each kubeconfig file contains a unique
#
# 1. Current context
# 2. Context
# 3. Cluster name
# 4. User name
#
# Designed to support the following KUBECONFIG assignment where contents are being merged
#
# export KUBECONFIG=~/.kube/config:$(find ~/.kube/clusters -type f | tr '\n' ':')
#
for FILE in $(find ~/.kube/clusters -type f)
do
   echo "Processing: $FILE"
   CONTEXT=$(basename $FILE)

   yq ".current-context = \"$CONTEXT\"" $FILE -i
   yq ".contexts[0].name = \"$CONTEXT\"" $FILE -i
   yq ".contexts[0].context.cluster = \"$CONTEXT\"" $FILE -i
   yq ".contexts[0].context.user = \"$CONTEXT\"" $FILE -i
   yq ".clusters[0].name = \"$CONTEXT\"" $FILE -i
   yq ".users[0].name = \"$CONTEXT\"" $FILE -i

   chmod 600 $FILE
done
```

So now I have a list of clusters I can access using ```kubectx```. Hope this helps.
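With that in place, reloading the shell should make each cluster file show up as its own context, named after the file:

```
source ~/.bashrc
kubectl config get-contexts   # one context per file in ~/.kube/clusters
kubectx                       # same list, for quick switching
```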


figaro42

I strongly recommend using Kubie (https://github.com/sbstp/kubie). It alters your prompt when you select a context, so you always know which cluster you're working on. You don't want to delete workloads from the wrong cluster. Ask me how I know 😀.


fuzzy812

You could also use Lens. It's a great dashboard.


Mirkens

Use kubie (https://github.com/sbstp/kubie). It's a Rust tool that lets you choose your context and also lists all of them. It's pretty easy to set up and use.
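Roughly how it's used; kubie drops you into a sub-shell scoped to the chosen context, so parallel terminals don't interfere with each other:

```
kubie ctx              # fuzzy-pick a context and enter a shell scoped to it
kubie ctx cluster1     # or name the context directly
kubie ns kube-system   # change namespace within that shell
```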


BoKKeR111

I have been using direnv. It allows me to set a context based on the folder I am standing in. This way you can have multiple terminals with different environments open at the same time, something `kubectl config set-context` can't do AFAIK.


reddit_clone

I do the same! One folder named after each cluster; CD'ing into it sets things up via direnv. With some kubectl prompt magic, the prompt changes to the cluster+context name. I have dozens of terminals open at the same time, and mistakes are much reduced by this setup (I have iTerm profiles with different colors/fonts to indicate dev/canary/prod, so that I don't run a command in production thinking it was dev). Log files you download stay in that folder. The advantages are huge; direnv is really underappreciated. I also use it the same way with AWS credentials, so that I can target different accounts from different terminals.
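A sketch of what such a per-cluster folder's `.envrc` might contain (paths and profile name are hypothetical):

```
# ~/clusters/prod/.envrc
export KUBECONFIG=$(pwd)/kubeconfig   # cluster credentials live next to your work
export AWS_PROFILE=prod               # pin AWS credentials for this terminal too
```

Run `direnv allow` once in the folder to approve it; from then on, cd'ing in and out switches the environment automatically.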


youngpadayawn

You could set an alias for each cluster, e.g. `alias k01='KUBECONFIG=~/.kube/config01 kubectl'`


hijinks

I use `direnv`, so when I cd into e.g. `./k8s/prod` my config is set to the prod cluster. My direnv file, which lives in `./k8s/prod`, looks like:

```
❯ cat .envrc
export KUBECONFIG=$(pwd)/kubeconfig
```


S0methingdiff

I'm using a tool called kubeswitch, but kubectx is fine too. Lens feels too bloated; on my i5-11400 with 16 GB of RAM it takes twice as long to start as any AAA game 🤷‍♂️


LeadershipFamous1608

I changed the cluster names and contexts inside my actual clusters and in the .kube/config file on the machine from which I am going to access both clusters:

```
root@pve1:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.xxx.10:6443
  name: k8s-cluster1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.xx.10:6443
  name: k8s-cluster2
contexts:
- context:
    cluster: k8s-cluster1
    user: kubernetes-admin
  name: k8s-cluster1-admin
- context:
    cluster: k8s-cluster2
    user: kubernetes-admin
  name: k8s-cluster2-admin
current-context: k8s-cluster2-admin
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
```

When I export the config files manually I can list the nodes on both clusters. It also lists the contexts as below:

```
root@pve1:~# kubectl config get-contexts
CURRENT   NAME                 CLUSTER        AUTHINFO           NAMESPACE
          k8s-cluster1-admin   k8s-cluster1   kubernetes-admin
*         k8s-cluster2-admin   k8s-cluster2   kubernetes-admin
```

When I select the **k8s-cluster1-admin** context I can list the nodes. But when I switch the context to **k8s-cluster2-admin** and run **kubectl get nodes**, it says **error: You must be logged in to the server (Unauthorized)**.

The commands below succeed, but when I merge the configs together it doesn't work. I am not sure if that is because the user is the same in both clusters.

**kubectl --kubeconfig=cluster1-config get nodes**
**kubectl --kubeconfig=cluster2-config get nodes**


LeadershipFamous1608

RESOLVED: The issue turned out to be with the user. Both clusters used the same username, so during the merge I guess the user entries were overlapping. Therefore, I did the following:

1. Copied the existing .kube/config files from the master nodes.
2. Renamed the user names in both copied files.
3. Merged the files.

Now everything works. Thank you for all the help and comments :)
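For anyone hitting the same thing, a minimal sketch of that rename step with `yq` (file and user names are hypothetical; each copied file is assumed to contain a single user entry, as kubeadm generates):

```
# Give each copied kubeconfig its own user name before merging
yq -i '.users[0].name = "cluster1-admin" | .contexts[0].context.user = "cluster1-admin"' ~/cluster1-config
yq -i '.users[0].name = "cluster2-admin" | .contexts[0].context.user = "cluster2-admin"' ~/cluster2-config

# Merge the renamed files
export KUBECONFIG=~/cluster1-config:~/cluster2-config
kubectl config view --flatten > ~/.kube/config
```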


kneticz

I keep my kubeconfigs separate, in .kube/contexts. I just have a script to iterate over the contexts and assign them to the KUBECONFIG env var. Just add the following to your bashrc:

```
export KUBECONFIG=$(for YAML in $(find ${HOME}/.kube/contexts -name '*.yaml') ; do echo -n ":${YAML}"; done)
```

Or, if you use Windows, it's a bit uglier, as expected in PowerShell; just add this to your PS profile:

```
$kube_dir = '~\.kube\contexts'

if (!(Test-Path -Path "${kube_dir}")) {
    Write-Output "Could not find path '${kube_dir}'"
    Exit
}

$kubeconfigs = [System.Collections.Generic.List[string]]::new()

Get-ChildItem "${kube_dir}" |
    Foreach-Object {
        $kubeconfigs.Add($_.FullName)
    }

Write-Output "Found $($kubeconfigs.Count) kubeconfig files."

$kube_path = ($kubeconfigs -join ";")
[Environment]::SetEnvironmentVariable("KUBECONFIG", "${kube_path}", [System.EnvironmentVariableTarget]::User)
Write-Output "'KUBECONFIG' user environment variable updated."
```