Posts

Showing posts with the label Kubernetes

KB: How OIDC Connects AKS Service Accounts to Azure Managed Identities (MI)

OIDC (OpenID Connect) is the glue that binds the Kubernetes Service Account (SA) → Azure Managed Identity (MI) → Federated Credential chain. Here’s what’s really happening behind the scenes in AKS + Microsoft Entra ID (formerly Azure AD).

🔐 The Role of OIDC in the AKS–Azure Identity Chain

1. OIDC Issuer: The Cluster’s Identity Provider

When you enable OIDC on your AKS cluster, Azure assigns it an OIDC issuer URL, like:

https://eastus.oic.prod-aks.azure.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/

This URL acts like a mini identity provider (IdP) for your cluster. Inside the cluster, Kubernetes can issue JWT tokens for Service Accounts, signed by this OIDC issuer. Each token includes claims like:

- iss → the OIDC issuer URL
- sub → system:serviceaccount:<namespace>:<serviceaccount>
- aud → the audience you request when you create the token

Those claims prove: “This token was issued by this AKS cluster for this Service Account.” (A quick way to inspect them is sketched after this excerpt.)

2. Federated Cr...
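To see the issuer and the claims for yourself, here is a minimal command-line sketch. It assumes a cluster named myAKS in resource group myRG and a service account workload-sa in namespace apps; all of those names are hypothetical placeholders, and the api://AzureADTokenExchange audience is the one typically used for Entra ID token exchange.

# 1. Read the cluster's OIDC issuer URL (cluster and group names are hypothetical)
az aks show --resource-group myRG --name myAKS --query "oidcIssuerProfile.issuerUrl" -o tsv

# 2. Mint a short-lived token for the Service Account with an explicit audience
TOKEN=$(kubectl create token workload-sa -n apps --audience api://AzureADTokenExchange)

# 3. Decode the token payload (base64url, possibly unpadded) and inspect iss / sub / aud
python3 -c 'import sys,base64,json; p=sys.argv[1].split(".")[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))' "$TOKEN"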

KB: How to Bypass Tolerations/Taints/Affinity in Kubernetes

Note: by design, it is the "Scheduler" pod's responsibility to place a pod on the correct node. But if you manually specify the nodeName selector, you are bypassing the "Scheduler" pod, which means you are skipping taints/tolerations and any affinity. Basically, you are skipping the "Scheduler Logic" altogether.

If you manually specify the nodeName field in a Kubernetes Pod spec, you are bypassing:

- Taints and tolerations
- Node affinity
- Scheduler logic entirely

Here's what happens: when you set .spec.nodeName in a Pod or Deployment template, Kubernetes does not use the scheduler to determine where the pod should run. The pod is directly assigned to that node. As a result (see the sketch after this excerpt):

- Tolerations are ignored – the pod is placed even if it doesn't tolerate the node's taints.
- Affinity rules are ignored – because the scheduler isn't consulted at all.
- NodeSelector and affinity constraints are not evaluated – they are bypassed.

What can go ...
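A minimal sketch of the bypass, assuming a node named worker-1 carrying a NoSchedule taint (both the node name and the taint key/value are hypothetical): the pod below lands on the node even though it has no matching toleration, because .spec.nodeName hands it straight to that node's kubelet and the scheduler's taint check never runs.

kubectl taint nodes worker-1 dedicated=infra:NoSchedule   # hypothetical node and taint

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bypass-demo
spec:
  nodeName: worker-1        # direct assignment: the scheduler never sees this pod
  containers:
  - name: app
    image: nginx
EOF

Note that NoExecute taints are still enforced on the node itself (the pod would be evicted), so this trick only sidesteps scheduling-time checks.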

KB: Kubernetes Operators

Kubernetes Operators are software extensions that use custom controllers to manage complex, stateful applications on Kubernetes. They follow the controller pattern: constantly observing the cluster's current state and reconciling it with the desired state as defined by a Custom Resource (CR).

Operators are typically used to automate:

- Installation and configuration
- Scaling and updates
- Backups, failovers, and recovery
- Application-specific lifecycle management

They are not part of the Kubernetes core control plane, but they integrate tightly with it using Kubernetes-native APIs and resources. Operators encapsulate human operational knowledge into code, enabling advanced automation beyond what built-in controllers offer.

References:
- https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/
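To make the Custom Resource side of the pattern concrete, here is a minimal, hypothetical sketch: a CRD defining a Backup kind under the made-up group example.com, plus one CR instance. An operator's custom controller would watch objects of this kind and reconcile the cluster toward each object's spec; none of these names come from a real operator.

# Define the custom API type the operator manages (all names hypothetical)
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  names:
    kind: Backup
    plural: backups
    singular: backup
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
EOF

# Declare desired state; the operator's controller reconciles toward this spec
kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly-db-backup
spec:
  schedule: "0 2 * * *"
EOF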

KB: Kubectl Cheatsheet

https://kubernetes.io/docs/reference/kubectl/quick-reference/ (06/05/2025)

kubectl Quick Reference

This page contains a list of commonly used kubectl commands and flags.

Note: These instructions are for Kubernetes v1.33. To check the version, use the kubectl version command.

Kubectl autocomplete

BASH

source <(kubectl completion bash)   # set up autocomplete in bash into the current shell; the bash-completion package should be installed first
echo "source <(kubectl completion bash)" >> ~/.bashrc   # add autocomplete permanently to your bash shell

You can also use a shorthand alias for kubectl that also works with completion:

alias k=kubectl
complete -o default -F __start_kubectl k

ZSH

source <(kubectl completion zsh)   # set up autocomplete in zsh into the current shell
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc   # add autocomplete permanently to your zsh shell

FISH ...