Why does a Kubernetes namespace sometimes get stuck while deleting?

When you delete a namespace, Kubernetes doesn’t delete it instantly. Instead:
- It removes the namespace’s resources (pods, services, deployments, etc.)
- It runs finalizers (small cleanup tasks that must complete before deletion) to make sure nothing breaks
- It checks owner references & propagation policies to decide the order in which things are deleted

But if a finalizer never finishes, the namespace gets stuck in Terminating.

What you need to know:
- Finalizers act like a lock until cleanup is done
- Owner references decide which resources are deleted along with their parent
- Propagation policies control whether children are deleted first, later, or left behind

To fix it:
- First find out which finalizer is blocking (kubectl get ns <name> -o yaml) and fix the controller responsible
- Only as a last resort, carefully patch/remove the finalizers by hand and re-create the namespace

𝗗𝗲𝘁𝗮𝗶𝗹𝗲𝗱 𝗚𝘂𝗶𝗱𝗲: https://lnkd.in/gjzySRtW

#kubernetes #devops
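The deletion flow above can be sketched as a toy model (purely illustrative, not the real API machinery; the field names only mirror real object metadata):

```python
# Toy model of namespace deletion with finalizers (illustration only).

def request_delete(obj):
    # The API server doesn't remove the object; it only marks it for deletion.
    obj["deletionTimestamp"] = "2024-01-01T00:00:00Z"

def try_finalize(obj):
    # The object is only really removed once its finalizers list is empty.
    if obj.get("deletionTimestamp") and not obj["finalizers"]:
        return None          # deleted for real
    return obj               # still "Terminating"

ns = {"name": "demo", "finalizers": ["kubernetes"]}
request_delete(ns)
print(try_finalize(ns))      # still Terminating: a finalizer remains
ns["finalizers"].clear()     # cleanup finished (or finalizer patched away)
print(try_finalize(ns))      # None: deletion completes
```

This is why patching finalizers away "unsticks" a namespace: with the list empty, nothing blocks the final removal anymore.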
🚀 Day 48 of #90DaysOfDevOps — Difference between docker stop and docker kill

Today I explored something simple but very important in container lifecycle management: how to gracefully stop a container vs. how to forcefully kill it.

Why do we need these commands?
When running applications inside containers, sometimes we want to shut them down nicely (so they save logs, close connections, etc.), and sometimes we need to terminate them immediately (e.g., when a container is stuck or unresponsive). This is where docker stop and docker kill differ.

docker stop → Graceful shutdown
How it works: sends a SIGTERM signal first, giving the containerized application a chance to shut down gracefully. If the app doesn’t exit within 10 seconds (the default, configurable with -t), Docker sends SIGKILL to forcefully terminate it.
docker stop <container_id>
Use case: in production, to let apps finish tasks, close DB connections, write logs, etc.

docker kill → Immediate termination
How it works: sends a SIGKILL signal directly — the container stops immediately without any cleanup.
docker kill <container_id>
Use case: in development or emergencies, when a container is frozen, stuck, or not responding to docker stop.

Real-life analogy
docker stop → politely asking someone to leave the room; you give them a few seconds to pack their stuff and exit.
docker kill → security drags them out immediately, no time for packing!

When to use which?
Use docker stop → for normal shutdown in production.
Use docker kill → for emergency cases when graceful shutdown doesn’t work.

Today’s takeaway
docker stop = graceful
docker kill = forceful

#DevOps #Docker #Containers #Linux #Automation #CloudComputing #Kubernetes #Networking #90DaysOfDevOps #mentor Trupti Mane
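The two signals can be demonstrated without Docker at all, using a plain child process. A minimal sketch (assumes a POSIX system):

```python
# Demonstrate the signals behind `docker stop` vs `docker kill` without Docker,
# by signalling a plain child process. Minimal sketch; assumes POSIX.
import signal
import subprocess
import sys
import time

CHILD = (
    "import signal, sys, time\n"
    "signal.signal(signal.SIGTERM, lambda *a: sys.exit(0))  # graceful handler\n"
    "time.sleep(30)\n"
)

def run_and_signal(sig):
    p = subprocess.Popen([sys.executable, "-c", CHILD])
    time.sleep(0.5)                 # give the child time to install its handler
    p.send_signal(sig)
    return p.wait()

term_code = run_and_signal(signal.SIGTERM)   # like `docker stop`: handler runs
kill_code = run_and_signal(signal.SIGKILL)   # like `docker kill`: no cleanup
print("SIGTERM exit code:", term_code)       # 0  -> exited cleanly
print("SIGKILL exit code:", kill_code)       # -9 -> killed outright
```

SIGTERM lets the handler run (clean exit, code 0); SIGKILL can never be caught, so the process dies instantly with no chance to clean up.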
🚀 Day 51 of #90DaysOfDevOps — What is a Dockerfile?

Today, I explored the Dockerfile, the core building block for creating Docker images efficiently and consistently.

What is a Dockerfile?
A Dockerfile is a simple text file containing a set of instructions that Docker uses to build an image automatically. Instead of manually setting up environments, we define everything (dependencies, configurations, commands) in this file. It’s the blueprint for building reproducible, portable container images.

Why use a Dockerfile?
Automation: eliminates manual setup, so builds are consistent.
Reusability: the same Dockerfile can be used across teams and environments.
Version control: track changes easily with Git.
Portability: run your app anywhere; “it works on my machine” solved!

Example Dockerfile:

FROM ubuntu
RUN apt update -y
RUN apt install nginx -y
EXPOSE 80
WORKDIR /var/www/html
RUN echo "<h1>This is my dockerfile</h1>" > index.html
CMD ["nginx", "-g", "daemon off;"]

Today’s takeaway
A Dockerfile transforms manual setup into a single, version-controlled definition, making Docker images automated, portable, and reproducible. It’s a must-know skill for every DevOps engineer!

#DevOps #Docker #Dockerfile #Containers #CloudComputing #Automation #Kubernetes #Linux #DockerImages #LearningJourney #90DaysOfDevOps #Mentor Trupti Mane #Day51
#90DaysOfContainers — Day 3/90

𝗪𝗵𝗮𝘁 𝗥𝗲𝗮𝗹𝗹𝘆 𝗛𝗮𝗽𝗽𝗲𝗻𝘀 𝗪𝗵𝗲𝗻 𝗬𝗼𝘂 𝗧𝘆𝗽𝗲 𝗱𝗼𝗰𝗸𝗲𝗿 𝗿𝘂𝗻?

We’ve all done it — you install Docker, you run:

docker run hello-world

And boom — it prints “Hello from Docker!” …but have you ever wondered what actually happened behind the scenes? When you type docker run, a lot happens silently under the hood 👇

• Docker checks if the image exists locally. If not found → it pulls it from Docker Hub (just like how you clone code from GitHub).
• Docker creates a new container from that image — a lightweight isolated environment. Your container gets its own:
1. File system
2. Network stack
3. Process space
4. Runtime
• The process defined in the image starts executing. For example, hello-world just prints a message and exits.

Docker containers don’t have a full operating system. They share your host’s kernel — that’s why they start in milliseconds, not minutes like virtual machines.

#Docker #DevOps #Containers #SoftwareEngineering
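The pull-if-missing flow above can be sketched as a toy simulation (purely illustrative; the `registry` dict just stands in for Docker Hub):

```python
# Toy simulation of the `docker run` flow: local cache check, pull on miss,
# then "run" the image's default process. Purely illustrative.
registry = {"hello-world": "Hello from Docker!"}   # stands in for Docker Hub
local_images = {}                                  # local image cache

def docker_run(image):
    if image not in local_images:
        print(f"Unable to find image '{image}' locally")
        local_images[image] = registry[image]      # "pull" from the registry
    # create an isolated container and start its process (here: just print)
    print(local_images[image])

docker_run("hello-world")   # first run: pulls the image, then runs it
docker_run("hello-world")   # second run: cache hit, no pull needed
```

The second call skips the pull entirely, which is why repeated `docker run` of the same image starts so much faster than the first.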
For those new to Docker, think of Docker Hub as GitHub for Docker images. When you start a container from an image that isn’t present locally, Docker pulls the image—complete with dependencies, the application, and necessary packages—from Docker Hub onto the machine running the container. To fully manage containers, remember that stopping a container doesn’t remove it; a separate removal command (docker rm) is needed. #Docker #containers #DockerHub #DevOps #technology
🚢 𝙀𝙫𝙚𝙧 𝙬𝙤𝙣𝙙𝙚𝙧𝙚𝙙 𝙝𝙤𝙬 𝘿𝙤𝙘𝙠𝙚𝙧 𝙖𝙘𝙩𝙪𝙖𝙡𝙡𝙮 𝙬𝙤𝙧𝙠𝙨? I just found this awesome diagram and had to share it! 👇 🔍 Here’s a quick breakdown: ✅ Docker Client – Where you run commands like: 🔹 𝙙𝙤𝙘𝙠𝙚𝙧 𝙗𝙪𝙞𝙡𝙙 🛠️ 🔹 𝙙𝙤𝙘𝙠𝙚𝙧 𝙥𝙪𝙡𝙡 📥 🔹 𝙙𝙤𝙘𝙠𝙚𝙧 𝙧𝙪𝙣 🚀 ✅ Docker Host – The engine that handles your containers 🖥️ 🔹 Images (e.g., Ubuntu, NGINX) are stored locally 🔹 Containers are created from these images and run your applications ✅ Docker Registry – A central place (like Docker Hub 🌐) to store and share images. 🧠 Key takeaway: Docker makes it simple to build, ship, and run applications anywhere. It’s a true game-changer for portability and scalability in modern development. 💡 💬 If you’re new to Docker or just need a refresher, this diagram is a perfect way to visualize how everything connects! 💬 Check out my portfolio and projects here 👉 https://lnkd.in/dtbFkn43 #Docker #DevOps #Containers #CloudComputing #SoftwareEngineering #TechSimplified
Just read this and keep it in mind; it will definitely be helpful in the future.

Kubernetes resources, such as namespaces, pods, or persistent volumes, can become stuck in a "Terminating" state. One of the reasons for that is finalizers.

Finalizers are a Kubernetes feature that tells Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion.

When you tell Kubernetes to delete an object that has finalizers specified:
1. The Kubernetes API marks the object for deletion and returns a 202 status code (HTTP "Accepted").
2. The target object remains in a Terminating state until the actions defined by the finalizers complete.

Practical use cases for finalizers in Kubernetes:
1. Protecting persistent volumes: finalizers can help avoid accidental deletion of resources.
2. Running custom cleanup logic in operators/controllers before object removal.
3. Ensuring related resources (like child objects) are properly deleted first.

As always, the comment section is yours 🙂 share your insights, point out corrections, or drop related article links. Let’s learn (and unlearn) together.

#linux #kubernetes #devops #sre #devopsinterview
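Use case 2 (custom cleanup in an operator/controller) typically follows a standard pattern: do the cleanup first, then remove your own finalizer so the API server can finish the delete. A minimal sketch (the finalizer name and field layout are illustrative, not from any real operator):

```python
# Sketch of the controller-side finalizer pattern: clean up external
# resources, then remove our finalizer to unblock the actual deletion.
MY_FINALIZER = "example.com/cleanup"   # illustrative finalizer name

def reconcile(obj):
    # Called by the controller whenever it observes the object.
    if obj.get("deletionTimestamp") and MY_FINALIZER in obj["finalizers"]:
        print(f"releasing external resources for {obj['name']}")  # custom cleanup
        obj["finalizers"].remove(MY_FINALIZER)  # unblocks the deletion
    return obj

pv = {"name": "data-volume",
      "deletionTimestamp": "2024-01-01T00:00:00Z",
      "finalizers": ["example.com/cleanup"]}
reconcile(pv)
print(pv["finalizers"])   # [] -> the API server can now remove the object
```

If the controller crashes before removing its finalizer, the object stays in Terminating, which is exactly the "stuck" symptom described above.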
Building out a Docker homelab. Part 2 is live: Homepage dashboard deployment. Config files, YAML debugging, multiple restarts. How many docker compose restarts will it take? As many as it takes 🫠
📝 Blog: blog.artezchapman.com
💻 Code: https://lnkd.in/ejFwhCyP
Part 3 coming soon: Nginx, Grafana, making everything work together.
#DevOps #Docker #Homelab #InfrastructureAsCode
🚀 Let’s talk about Pods in Kubernetes!

In Kubernetes, the lowest-level deployable unit is the Pod. We can’t directly create containers in K8s — because enterprise-grade environments need declarative configurations, not imperative ones. The Docker commands we would run by hand are instead written as YAML manifests in Kubernetes.

A Pod can hold more than one container: alongside your main container you can add helper containers (called sidecars) to manage configuration or logs.

Once a Pod is created, the Pod gets its own IP address, shared by all containers inside it; individual containers don’t get their own IPs. That’s how you reach it inside the cluster: via the Pod’s IP, not by container IPs.

Here are some commonly used commands 👇
kubectl get nodes # Get information about all nodes (control plane & worker nodes)
kubectl get pods # List all pods in the default namespace
kubectl get pods -o wide # Get pods with additional info like IP address
kubectl describe pod <pod-name> # Detailed info about a specific pod
kubectl logs <pod-name> # View logs for troubleshooting errors

If something goes wrong, kubectl logs is your best friend! 💪

I’m still learning and open to mistakes 🙈 Would love your feedback or corrections in the comments 👇

#Kubernetes #DevOps #CloudNative #Containers #K8s #Docker #LearningInPublic #DevOpsCommunity #OpenToFeedback
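For the declarative side, a minimal Pod manifest looks something like this (the name and image are just examples), applied with kubectl apply -f pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo        # example name
spec:
  containers:
    - name: nginx
      image: nginx:1.27   # example image tag
      ports:
        - containerPort: 80
```

Then kubectl get pods -o wide shows the Pod’s IP, and kubectl logs nginx-demo shows its output.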
#Day-02 Today, I explored Dockerizing a Simple Application:

1. Setup / Prerequisites:
Docker Desktop (local) or Play with Docker (cloud-based)
Prepared environment to build and run containers

2. Cloning the Application Repository:
Fetched the sample application code from GitHub:
git clone https://lnkd.in/diHWv-f2

3. Understanding the Dockerfile:
Layers: Base Image → Working Directory → Copy Files → Install Dependencies → Expose Port → CMD
Each instruction builds a layered image, improving efficiency and portability

4. Building the Docker Image:
Command: docker build -t getting-started-app .
Creates a portable image containing the application and all dependencies

5. Pushing the Image to Docker Hub:
Tagged and pushed the image to Docker Hub for sharing and reuse
Enables the “build once, run anywhere” approach

6. Running and Troubleshooting the Container:
Run: docker run -p 3000:3000 YOUR_USERNAME/getting-started-app
Inspect and debug using: docker exec -it <container-id> sh
Ensures the application runs consistently in any environment

Key Takeaway: Docker makes containerization simple, allowing developers to package, ship, and run applications efficiently while ensuring environment consistency.

Read More: https://lnkd.in/duvNuRUi

Thanks to Piyush sachdeva & The CloudOps Community for the insightful session on Docker Image.

#Docker #Containers #DevOps #CI_CD #Kubernetes #SoftwareEngineering #TechLearning #CloudNative #LearningJourney
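The layer order in step 3 maps to a Dockerfile roughly like this (a sketch assuming a Node.js app listening on port 3000; adjust the base image, paths, and commands for your stack):

```dockerfile
FROM node:18-alpine          # base image (example version)
WORKDIR /app                 # working directory
COPY package*.json ./        # copy dependency manifests first (better caching)
RUN npm install              # install dependencies
COPY . .                     # copy the rest of the source
EXPOSE 3000                  # port the app listens on
CMD ["node", "src/index.js"] # default command (example entry point)
```

Copying the dependency manifests before the rest of the source means the npm install layer is cached and only re-runs when dependencies actually change.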
💡 10 Docker Compose Commands Every Dev & SysAdmin Should Know

If you’re working with containers, chances are Docker Compose is already your daily companion. Here’s a quick cheat sheet with the top 10 commands you’ll use again and again 👇

1️⃣ docker-compose up -d → Start your containers in the background.
2️⃣ docker-compose down → Stop & remove containers and networks (add -v to also remove volumes).
3️⃣ docker-compose ps → See the status of your running services.
4️⃣ docker-compose logs -f → Stream real-time logs (super useful for debugging).
5️⃣ docker-compose exec <service> bash → Jump inside a running container.
6️⃣ docker-compose build → Build or rebuild services from your Dockerfile.
7️⃣ docker-compose restart → Restart services with one command.
8️⃣ docker-compose stop → Stop services without removing them.
9️⃣ docker-compose start → Start services that were previously stopped.
🔟 docker-compose pull → Fetch the latest images from the registry.

⚡ Bonus tip: combine them with -f <file.yml> when working with multiple environments (dev, staging, prod). On newer Docker versions, the same commands also work as docker compose (with a space) via the Compose plugin.

👉 Which one is YOUR lifesaver in daily work?

#Docker #DevOps #SysAdmin #Containers #CheatSheet
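A minimal docker-compose.yml the commands above could operate on (service names, images, and ports are just examples):

```yaml
# Two example services: a web frontend and a database.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"            # host:container
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # demo only; use secrets in real setups
```

With this file in place, docker-compose up -d starts both services, docker-compose ps lists them, and docker-compose exec web sh drops you into the web container.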