Posts

Electron Process Execution Failure with FSLogix

Technical Note: Electron Process Execution Failure with FSLogix

1. Overview
When running Electron-based applications in environments using FSLogix Profile or Office Containers, users may encounter issues where the Electron process fails to launch or execute properly. This behavior has been observed in Azure Virtual Desktop (AVD), Windows Virtual Desktop (WVD), and other environments where FSLogix filter drivers are active.

2. Symptoms
- Electron-based applications (e.g., desktop apps built on Electron, CLI wrappers) do not start, remain unresponsive, or terminate silently.
- No visible logs or error messages are generated by the application.
- Standard executables run correctly when placed outside of the FSLogix-controlled profile path (e.g., copying to C:\Temp allows execution).
- The issue is reproducible across all Electron apps in the FSLogix-managed profile.

3. Root Cause
The issue is linked to FSLogix filter drivers (frxdrv, frxdrvvt, frxccd) interfe...
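To isolate whether the FSLogix filter drivers are involved, a simple test is to copy the same application outside of the profile path and compare behavior, as described in the symptoms above. The following is a minimal PowerShell sketch of that test; the application folder name MyElectronApp and its install path are placeholders, not part of the original note.

# List loaded minifilter drivers and look for the FSLogix ones (run from an elevated prompt)
fltmc filters | Select-String "frx"

# Placeholder path of the Electron app inside the FSLogix-managed profile
$appDir = Join-Path $env:LOCALAPPDATA "Programs\MyElectronApp"

# Copy the whole app folder to a location outside the profile/container scope
Copy-Item -Path $appDir -Destination "C:\Temp\MyElectronApp" -Recurse -Force

# If this copy launches while the profile copy does not, the filter drivers are the likely cause
Start-Process -FilePath "C:\Temp\MyElectronApp\MyElectronApp.exe"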

KB: Kubernetes how to bypass Tolerations/Taints/Affinity

Note: by design, it is the scheduler's responsibility to place a pod on the correct node. If you manually specify nodeName, you bypass the scheduler, which means the node's taints/tolerations and any affinity rules are skipped; you are skipping the scheduler logic altogether.

If you manually specify the nodeName field in a Kubernetes Pod spec, you are bypassing:
- Taints and tolerations
- Node affinity
- Scheduler logic entirely

Here's what happens: when you set .spec.nodeName in a Pod or Deployment template, Kubernetes does not use the scheduler to determine where the pod should run. The pod is directly assigned to that node. As a result:
- Tolerations are ignored – the pod is placed even if it doesn't tolerate the node's taints.
- Affinity rules are ignored – because the scheduler isn't consulted at all.
- NodeSelector and affinity constraints are not evaluated – they are bypassed.

What can go ...
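For illustration, a minimal Pod manifest that pins a pod to a specific node via .spec.nodeName is sketched below; the node name worker-1 and the container image are placeholder values. Because nodeName is set, the scheduler never sees this pod: the kubelet on worker-1 admits it directly, and taints on that node, tolerations, nodeSelector and affinity rules are never evaluated.

apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeName: worker-1        # direct assignment: the scheduler is bypassed
  containers:
    - name: app
      image: nginx:1.27

Note that if worker-1 does not exist or lacks the requested resources, the pod is not rescheduled elsewhere; it typically stays pending or fails on that node.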

KB: Kubernetes Operators

Kubernetes Operators are software extensions that use custom controllers to manage complex, stateful applications on Kubernetes. They follow the controller pattern: constantly observing the cluster's current state and reconciling it with the desired state as defined by a Custom Resource (CR).

Operators are typically used to automate:
- Installation and configuration
- Scaling and updates
- Backups, failovers, and recovery
- Application-specific lifecycle management

They are not part of the Kubernetes core control plane, but they integrate tightly with it using Kubernetes-native APIs and resources. Operators encapsulate human operational knowledge into code, enabling advanced automation beyond what built-in controllers offer.

References:
- https://www.cncf.io/blog/2022/06/15/kubernetes-operators-what-are-they-some-examples/
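As a concrete illustration of the pattern, an operator typically installs a CustomResourceDefinition and runs a controller that watches the corresponding Custom Resources. The sketch below is a hypothetical CR, not a real operator's API: the group example.com, kind PostgresCluster, and all spec fields are made-up names used only to show what a user-facing desired state might look like.

# Hypothetical Custom Resource; the operator's controller reconciles the actual
# cluster state (StatefulSets, Services, backup jobs, ...) toward this spec.
apiVersion: example.com/v1alpha1
kind: PostgresCluster
metadata:
  name: demo-db
spec:
  replicas: 3
  version: "16"
  backup:
    schedule: "0 2 * * *"   # the operator, not a human, creates the backup job

The user only declares the desired state; the encoded operational knowledge (how to scale, upgrade, back up, or fail over this particular application) lives in the controller.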

KB: Kubectl Cheatsheet

https://kubernetes.io/docs/reference/kubectl/quick-reference/ (06/05/2025)

kubectl Quick Reference
This page contains a list of commonly used kubectl commands and flags.
Note: These instructions are for Kubernetes v1.33. To check the version, use the kubectl version command.

Kubectl autocomplete

BASH
source <(kubectl completion bash)   # set up autocomplete in bash into the current shell; the bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc   # add autocomplete permanently to your bash shell.

You can also use a shorthand alias for kubectl that also works with completion:
alias k=kubectl
complete -o default -F __start_kubectl k

ZSH
source <(kubectl completion zsh)   # set up autocomplete in zsh into the current shell
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc   # add autocomplete permanently to your zsh shell

FISH ...