Bengaluru, Karnataka, India
401 followers
240 connections
Services
Experience & Education
-
freeCodeCamp
********* ********** — ***** ************** & **** ******
-
*** ****
********* ********** & *******
-
******** ****è***
*&* *********** *******
-
**** ******* ********** ** **********, ******
*.****
-
-
******** *********
*** ***** ****** ****** *********** ***********
-
Honors & Awards
-
Cloudverse 100: The people building the next generation of the internet
Business Insider
Featured in the Open-Source Developers category of "Cloudverse 100: The people building the next generation of the internet" by Business Insider.
Link: https://www.businessinsider.com/cloudverse-100-top-people-building-the-next-generation-internet-2022-11
Explore more posts
-
Chuck Keith
I’ve never been this obsessed with an automation tool before. N8N is open-source, private, local, and stupid powerful. Let me walk you through what it can do 👇

Firstly, the setup is stupid easy. You can install N8N on-prem with Docker (even on a Raspberry Pi)... OR (what I recommend) use the cloud.

Now imagine this: You want your favorite tech news every morning - Hacker News, BleepingComputer, subreddits etc. You build a workflow that grabs the latest stories and dumps them into a Discord channel. Done. No checking tabs. It comes straight to you.

But wait… that’s just the beginning. You can schedule triggers, limit how many stories you get, and format everything cleanly with JSON. No coding needed. It’s all visual, and insanely flexible.

Okay… now it gets wild. You can run actual system commands right inside your workflow. Want to ping 1.1.1.1 every morning and see if the internet’s up? Boom. Add a node. Done. It even shows the output, and you can send that straight into Discord too.

But let’s turn up the insanity: add AI. You drop in an AI node (OpenAI, Claude, or even a local model like LLaMA)… Then tell it: “Summarize this article in 2 sentences.” Bam! Your tech news now has a TL;DR, customized by you. You can merge those AI summaries with system output, format them with variables, and fire them off anywhere: Email, Discord, Slack, whatever. Now your daily digest is fully automated, summarized, and perfectly formatted.

YouTube? Yep. Every channel has an RSS feed. N8N pulls in the latest uploads, filters by date, and lets you pick which videos to keep. Add AI to summarize the transcript or rate the video? You can do that too.

Still not enough? Meet AI Agents. You can literally chat with your home lab. “Hey N8N, is Terry up?” (Terry is my server.) N8N runs the ping, gets the output, and responds with a full personality using AI. Yeah... it’s nuts.

And if your N8N is in the cloud (like mine), you can use TwinGate to talk securely to your local stuff. That means your automations can ping your router, check server uptime, even SSH into machines - all from the cloud!

Let this sink in: You can build an AI-powered automation agent that:
- Pulls tech news
- Summarizes with LLaMA
- Checks your internet
- Pings your server
- Notifies you in Discord
All without you lifting a finger.

This tool is scary powerful. You’ll be automating your life before you realize it. And the best part? It’s free. Open source. Local. Private.

Want to try it? Get started here 👉 https://lnkd.in/giYRuHGG and use coupon code NETWORKCHUCK . (I walk you through everything in this video: https://lnkd.in/gMWH-U63)

#n8n #automation #opensource #AI #homeLab #networkChuck
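The news-digest workflow described in this post can also be sketched outside n8n. A minimal Python sketch, assuming a hypothetical Discord webhook URL; the formatting function is the part n8n would express as visual nodes:

```python
import json
import urllib.request

# Hypothetical placeholder; replace with your channel's real webhook URL.
DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/your-id/your-token"

def build_discord_digest(stories, limit=5):
    """Format (title, url) pairs into a Discord webhook payload,
    mirroring the 'dump the latest stories into a channel' step."""
    lines = [f"- {title}\n  <{url}>" for title, url in stories[:limit]]
    return {"content": "Morning tech digest:\n" + "\n".join(lines)}

def post_digest(stories):
    """Fire the webhook (sketch only; needs a real URL and network access)."""
    data = json.dumps(build_discord_digest(stories)).encode("utf-8")
    req = urllib.request.Request(
        DISCORD_WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    demo = [("Example headline", "https://example.com/story")]
    print(build_discord_digest(demo)["content"])
```

Swapping the hard-coded list for an RSS fetch and putting the script on a cron schedule reproduces the trigger/limit/format steps the post walks through.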
2,106
108 Comments -
Govardhana Miriyala Kannaiah
Kubernetes Cost Reduction Techniques 👇
Each technique enables organizations to optimize Kubernetes usage and minimize expenses.
Right Sizing → Adjust CPU/memory to avoid over-provisioning.
Auto Scaling → Scale nodes/pods dynamically to reduce idle costs.
Pod Disruption Budget → Limit pod downtime during disruptions for availability.
Node Tainting → Prioritize workloads using taints and tolerations.
Image Optimization → Use smaller images for faster pulls, lower costs.
Spot Instances → Run non-critical tasks on cheaper, interruptible instances.
53K+ read my DevOps and Cloud newsletter: https://lnkd.in/gg3RQsRK
What do we cover: DevOps, Cloud, Containerization, IaC, GitOps, MLOps
🔁 Consider a repost if this is helpful
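The first technique, right-sizing, comes down to setting requests and limits that match observed usage. An illustrative container spec fragment; the numbers are placeholders, and yours should come from actual usage metrics:

```yaml
resources:
  requests:
    cpu: 250m        # what the scheduler reserves; match the observed baseline
    memory: 256Mi
  limits:
    cpu: 500m        # cap bursts without over-provisioning whole cores
    memory: 512Mi
```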
980
38 Comments -
Sahn Lam
Virtualization vs Containerization

Virtualization creates multiple virtual machines (VMs) on a single physical server. Each VM runs its own complete operating system and uses a hypervisor to manage hardware resources. Containerization packages applications with their dependencies into lightweight, portable containers that share the host operating system kernel.

Here are the four deployment patterns:

1. Bare Metal
Applications run directly on the physical server's operating system. No virtualization layer exists between the application and hardware. This provides maximum performance and lowest latency, but offers limited isolation and harder resource management.

2. Virtual Machines
A hypervisor creates multiple VMs on one physical server. Each VM includes a complete guest operating system, consuming significant memory and CPU overhead. VMs provide strong isolation between workloads but require more resources and longer startup times.

3. Containers
A container runtime (like Docker) runs containers that share the host OS kernel. Containers include only the application and its dependencies, not a full operating system. This makes them lightweight, fast to start, and resource-efficient compared to VMs.

4. Containers on VMs
Containers run inside virtual machines, combining both technologies. The VM provides hardware-level isolation while containers enable efficient application packaging. This hybrid approach is popular in cloud environments where you need both security isolation and operational efficiency.

--
Subscribe to our weekly newsletter to get a Free System Design PDF (158 pages): https://bit.ly/496keA7
#systemdesign #coding #interviewtips
790
15 Comments -
Praveen Singampalli
Interviewer: You don't know Linux interview questions, how can I select you?
Candidate: I may have missed a few questions, but I'm confident I can learn.
Interviewer: How will you learn?
Candidate: I have taken the DevOps mentorship with Praveen and classes are starting this week https://lnkd.in/gxUgNhpd
Interviewer: Do you think that by taking a course you can crack the same interview?
Candidate: I might still miss a few questions, but I will be more confident.

-> Linux Basics (Beginner Level)
What is Linux and how is it different from Unix?
What is the difference between Linux and Windows?
What is the Linux file system hierarchy?
What is a shell? What are some common types of shells?
What are runlevels in Linux?
How do you check the current working directory?
How do you view hidden files?
What is the difference between > and >>?
How do you search for a string in a file?
How do you check memory usage?
What command shows disk usage per directory?
How do you find a file by name?
What is cron and how do you schedule a job?
What is the use of top, ps, and kill commands?

-> Intermediate Linux Interview Questions
How do you check which process is using the most CPU/memory?
What’s the difference between nice and renice?
What’s the difference between kill, pkill, and killall?
How do you analyze and troubleshoot a system that is running slowly?
How do you check logs for a failed service?
What is the use of netstat, ss, and lsof?
How do you check if a port is open and listening?
Difference between scp and rsync?
How do you configure a static IP address?
How do you flush the DNS cache?
How do hard links differ from soft links?
How do you mount and unmount file systems?

-> Advanced Linux Interview Questions
What happens during the Linux boot process (BIOS → GRUB → init → systemd)?
What is initrd or initramfs?
What are cgroups and namespaces? How do they work?
How does Linux handle process scheduling?

Security & Access
What is SELinux/AppArmor? How do you troubleshoot permission issues related to them?
How do you securely copy files between Linux systems?
How do you set up SSH key-based authentication?

Performance & Tuning
How do you identify and resolve a memory leak?
What tools would you use for profiling a system under heavy load?
How do you analyze I/O wait using tools like iostat, vmstat, or iotop?
What are kernel modules? How do you manage them?
How do you tune kernel parameters with sysctl?

Bonus: Real-World Scenario-Based Questions
A process is stuck in uninterruptible sleep (D state). How do you investigate it?
A cron job isn’t running. What steps would you take to debug it?
A service fails to start after a reboot. How do you trace the problem?

I am teaching complete Linux with DevOps and cloud here, with job mentorship.
Register here - https://lnkd.in/gxUgNhpd [Live classes starting in 3 days]
Follow Praveen Singampalli for more such content ❣️
PS: Do share this post with your DevOps/Linux learning friends
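One of the beginner questions in the list, the difference between `>` and `>>`, can be verified in seconds at any terminal:

```shell
echo "first"  > demo.txt    # ">" truncates the file, then writes
echo "second" >> demo.txt   # ">>" appends to the end
echo "third"  > other.txt
echo "fourth" > other.txt   # truncating again: only "fourth" survives
cat demo.txt
cat other.txt
```

`demo.txt` ends up with both lines in order, while `other.txt` holds only the last write.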
459
11 Comments -
AKVA
If MAANG or Tier-1 is your goal, this Cloud Engineer roadmap is non-negotiable. Everyone wants to break into Cloud and DevOps roles at MAANG and Tier 1 companies, but what most people lack is an exact roadmap to follow. We have your back and have created a roadmap to becoming a Cloud Engineer at MAANG & Tier 1 companies in our latest AKVA Newsletter.

The Newsletter covers:
1. Master DSA & System Design: You should definitely understand DSA and the basics of system design to clear the first round at any MAANG company. They ask for it whether you are appearing for a developer, tester, or even a cloud engineer role.
2. DevOps Foundations: Build a strong DevOps foundation; learn about Docker, Linux, CI/CD pipelines, shell scripting, and GitHub. These fundamentals will help you master any tools you use at any MAANG company.
3. Pick a Cloud Platform: Choose any of the major cloud platforms and start practicing and learning on it.
4. Infrastructure as Code (IaC): Learn an IaC tool like Terraform, Pulumi, or CloudFormation to automate your infrastructure with code.
5. Programming & Scripting: Build strong coding skills in at least one language such as Python, Go, or Bash scripting. This is where the real cloud engineer automates things using these powerful languages.
6. Observability: Learn to monitor, log, and trace distributed systems. Tools like Prometheus, Grafana, and ELK are crucial for breaking into this role.
7. AI + DevOps: AI is reshaping every industry, and Cloud/DevOps is no exception. Learning how it impacts these roles today is what separates the smartest candidates from the rest.

You can access all the free resources linked to each roadmap step here: https://lnkd.in/dtiqaA9G
For a more detailed and technical newsletter, don't forget to follow us at: https://lnkd.in/dYJ8NKVy
Team AKVA
201
6 Comments -
Hussein Nasser
Each TCP connection provides a single ordered stream of bytes. Requests sent on the stream arrive in the order they were sent, serially. That doesn't necessarily mean the requests will be processed serially: the backend may choose to read them all at once and delegate each request to a thread. However, if a single byte of the first request is lost, all requests behind it are blocked even if their bytes have arrived, because delivering them out of turn would violate the order of the byte stream.
350
14 Comments -
Cloudairy
How a CI/CD Pipeline works in AWS ❗

AWS DevOps and CI/CD pipelines are the driving force behind achieving agile development and seamless software delivery.

🔗 What is CI/CD with AWS?
CI/CD, which stands for Continuous Integration and Continuous Deployment, is an automated approach that helps developers easily integrate code changes and deploy them to production. AWS offers a number of tools, including CodeCommit, CodeDeploy, and AWS CodePipeline, to guarantee that your software is always prepared for quick deployment with small updates.

🛠 How Does a CI/CD Pipeline Work on AWS?
Continuous Integration (CI):
🎯 Developers create and commit code to AWS CodeCommit, a fully managed source control service.
🎯 AWS CodeBuild automatically compiles, tests, and packages the code to ensure everything is in place.
Continuous Deployment (CD):
🎯 Once the code passes the CI phase, AWS CodePipeline ensures it’s ready for deployment.
🎯 AWS CodeDeploy automatically deploys the code to the target environments, such as EC2, ECS, or Lambda.

⚙️ Key Components of an AWS CI/CD Pipeline:
✅ Source Control Management (SCM): AWS CodeCommit is used for version control and storing code in a secure, scalable Git-based repository.
✅ Build Tools: AWS CodeBuild is a managed build service that compiles the source code, runs tests, and produces artifacts.
✅ Artifact Repositories: Amazon S3 or AWS CodeArtifact is used for storing build artifacts, Docker images, and application binaries, ensuring they are readily available for deployment.
✅ Deployment Tools: AWS CodeDeploy automates deployments to various services, including Amazon EC2 instances, ECS containers, and Lambda functions.

🌟 Benefits of AWS CI/CD:
✅ Faster Delivery: Smaller, frequent releases with CodePipeline accelerate feature updates and bug fixes.
✅ Enhanced Collaboration: AWS DevOps promotes collaborative development, enabling developers to work on different features without conflict, leading to more effective and harmonious teamwork.

Cloudairy: The All-in-One AI Workspace for Creation, Collaboration, and Insights
✔️ Create, collaborate, and gain insights in one unified platform.
✔️ Generate diagrams, documents, slides, and visual analytics with AI, and refine and deliver in one place.
✔️ Collaborate in real time across workspaces and teams with strict permission controls.
✔️ Start fast with 1000+ templates and thousands of icons for engineering, product, and strategy use cases.
✔️ Scale confidently with SSO and enterprise-ready security.
Sign up for Cloudairy for free today: https://lnkd.in/ehpw45qP
Request a Live Demo: https://lnkd.in/eNPHW6at
Source Image: AWS blogs
#cloudcomputing #aws #devops #kubernetes #cloudairy
188
6 Comments -
Arpit Bhayani
All databases, in fact all TCP servers, are susceptible to a connection storm. Let's dig deeper to understand what it is and how to handle it.

A TCP connection storm occurs when applications rapidly open and close a large number of database connections, overwhelming the server's connection handling capacity. The impact can be severe, leading to resource exhaustion (each connection takes up 2-8 MB), CPU thrashing during connection handshakes, internal lock contention (in the case of multi-threaded handling), high memory fragmentation, or even downtime. If your database is distributed and the master faces a connection storm, it may put your database in an inconsistent state, and it can be really tricky to bring it back to consistency.

A few ways to handle this at the OS level include configuring iptables rate limiting, which limits new connections per IP. You can also tune the TCP stack by updating `/etc/sysctl.conf` (or a drop-in file under `/etc/sysctl.d/`) and configuring SYN parameters and the connection queue. All major databases also expose TCP-related parameters like backlog size, pool size, timeouts, etc., so review the configurations and tune your parameters accordingly. You can also add a second line of defense with a database proxy (e.g., ProxySQL, PgBouncer, etc.).

A good practice is always to monitor the connection count on your database instance. Depending on the database you use, there are ways to gather these metrics, so proactively monitor them and set up alerts. Some examples are:
- `pg_stat_activity` in Postgres
- `threads_connected` metric in MySQL

This will help you prevent outages due to a connection storm and respond in time. By the way, managing databases is fun :) Also, you now have the words you need to dig deeper. If this interests you, go down the rabbit hole - it's going to be a fun ride.
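The TCP-stack tuning mentioned above is a sysctl config fragment. A hedged sketch with illustrative values only; tune them against your own workload and kernel version:

```shell
# /etc/sysctl.d/99-conn-storm.conf  (illustrative values, not recommendations)
net.core.somaxconn = 4096            # accept-queue (listen backlog) ceiling
net.ipv4.tcp_max_syn_backlog = 8192  # half-open (SYN) queue size
net.ipv4.tcp_syncookies = 1          # degrade gracefully under SYN floods
# apply with: sysctl --system
```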
btw, enrollments are open for my august sys design cohort (and about 14 seats left), filled with no-fluff and highly practical engineering discussions aimed at making you a better engineer - arpitbhayani.me/course
447
19 Comments -
Salvatore Sanfilippo
New EN video: Inspecting LLM embeddings in GGUF format with gguflib.c and Redis. So, as you probably know, many large language models are distributed as files with this extension, GGUF. They are usually very large files, and inside them are tensors that are basically the weights of the neural network: the attention weights, the feed-forward weights, the token embedding weights. This is the format used by the llama.cpp project and all the other projects that are based on llama.cpp, giving more or less credit to llama.cpp. [the full video in the comments]
110
1 Comment -
Mudassir Mustafa
This team spent 3 weeks debugging a mysterious DNS issue in their Kubernetes staging environment. The symptoms were maddening: → Metrics looked completely normal → Pods were running smoothly → Logs showed zero errors → But certain services intermittently failed to resolve names 3 weeks of investigation. Multiple senior engineers pulled in. Deep dives into networking configurations. Packet captures analyzed. Service mesh debugging sessions. Theories about load balancer issues. The culprit? kube-dns had a default CPU limit set too low. Under moderate load, it would throttle without throwing any alerts or writing error logs. Just... silent failures. This perfectly captures the hidden complexity trap of modern infrastructure and why our current tooling is not scaling in the age of AI. Defaults that seem reasonable but fail under real-world conditions. Critical services that degrade gracefully until they don't. No observability into the actual problem. Debugging and RCA takes hours. Your DNS resolver was quietly dying, but all your monitoring said everything was fine. 3 weeks of engineering time lost to a CPU limit that nobody questioned because it was the "recommended default." Modern infrastructure hides its failures behind layers of abstraction. The error messages don't exist. The metrics look healthy. The logs are clean. But your applications randomly break. How many weeks have you spent debugging "impossible" problems?
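The failure mode described here is typically fixed by raising (or removing) the resolver's CPU limit. An illustrative resources fragment for a kube-dns/CoreDNS container spec; the values are placeholders, not recommendations:

```yaml
resources:
  requests:
    cpu: 100m
  limits:
    cpu: "1"       # raised from a low default so DNS isn't throttled under load
    memory: 170Mi
```

The throttling itself is observable if you look for it: the cAdvisor metric `container_cpu_cfs_throttled_seconds_total` rises even while error logs stay clean.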
74
29 Comments -
Harisha Lakshan Warnakulasuriya(BSc.(ousl))
🔹 1. Foundational Patterns
These are the core structural patterns used in almost every Kubernetes app.

Sidecar Pattern
Add a secondary container to the same Pod to enhance or extend the primary container’s capabilities. Example: Log shipper, metrics collector, or proxy.

Ambassador Pattern
A helper container that acts as a proxy between your app and an external service. Example: Connecting to databases securely or through a specific network path.

Adapter Pattern
Converts or adapts the output of the main container into a format understandable by external tools. Example: Format log output to match Prometheus expectations.

Init Container Pattern
Run one or more setup containers before the main app container starts. Example: Initializing DB schema or pulling config from external sources.

🔹 2. Behavioral Patterns
These define how containers should behave at runtime.

Self-Healing Pattern
Use livenessProbe and readinessProbe to automatically restart unhealthy containers or delay traffic routing until ready.

Leader Election Pattern
Ensure one active leader among replicas for coordination or stateful tasks. Used in databases, message brokers, etc.

Work Queue Pattern
Distribute tasks across multiple worker Pods processing from a queue (like RabbitMQ, Kafka, etc.).

Batch Job Pattern
Use Job or CronJob resources to run one-off or scheduled workloads (e.g., data cleanup, report generation).

🔹 3. Structural Patterns
Used to control how containers and resources are organized.

Multi-Container Pod Pattern
Multiple containers working together inside the same Pod, communicating via localhost or a shared volume.

Init Containers (again)
Structurally separate initialization logic from runtime logic.

🔹 4. Configuration Patterns
Handle application configuration and secrets securely.

Configuration Resource Pattern
Use ConfigMaps and Secrets to inject config into containers without baking them into images.

Environment Variable Pattern
Inject configuration values as environment variables in your container spec.

Volume Configuration Pattern
Mount ConfigMaps or Secrets as volumes into containers.

🔹 5. Advanced Deployment Patterns
For controlled rollout, updates, and scalability.

Rolling Update Pattern
Gradually replace old versions with new ones while maintaining service availability.

Blue-Green Deployment Pattern
Deploy the new version alongside the old one, switch traffic when ready.

Canary Deployment Pattern
Route a small percentage of traffic to the new version before full rollout.

Shadow Deployment Pattern
Deploy the new version in parallel and mirror live traffic to it for testing (with no real impact).

🔹 6. Observability and Monitoring Patterns

Log Aggregation Pattern
Collect logs from all containers and ship them to a central system (e.g., ELK, Fluentd).

Metrics Collection Pattern
Expose metrics endpoints and use Prometheus/Grafana for monitoring.

Tracing Pattern
Track end-to-end requests across microservices (e.g., Jaeger, Zipkin).
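The Sidecar pattern at the top of the list is the easiest to show concretely. A minimal sketch of a Pod where a log-shipping sidecar shares a volume with the main app; the image names are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}            # shared scratch space, lives as long as the Pod
  containers:
    - name: app               # primary container writes its logs here
      image: example/app:latest       # illustrative image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper       # sidecar tails the same directory and ships it
      image: example/shipper:latest   # illustrative image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
```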
40
-
Sagar Gulabani
AWS was randomly terminating our instances. Or at least we thought so, because we didn't know any better.

In our EKS cluster, we had a managed node group and, as we all know, it uses an AWS Auto Scaling group underneath to provision instances. We had set up Cluster Autoscaler to manage the size of the Auto Scaling group. We had some long-running jobs on our cluster, which were scheduled to run on these instances. Whenever these jobs were scheduled, the Cluster Autoscaler would scale up the ASG and we got an instance on which the job would run.

The problem was that these jobs were getting randomly terminated, and when we checked the kubectl events we saw that the underlying node was being terminated. First we thought that Cluster Autoscaler was responsible. We checked the Cluster Autoscaler logs but didn't find anything there. We read some Cluster Autoscaler documentation and figured that we could apply an annotation to the pod: cluster-autoscaler.kubernetes.io/safe-to-evict: "false". This would block Cluster Autoscaler from evicting the pod during any consolidation process it might perform. We went ahead and applied that annotation. But after a couple of days, we were still running into the same issue.

So we did some digging, and decided to check the AWS Auto Scaling group's autoscaling activities. And there it was, clearly written: "Instances were launched to balance instances in zone ap-south-1a and ap-south-1b. Instance has been taken out of service in response to difference between desired and actual size." And here we found our culprit.

AWS was trying to balance the number of instances across the two availability zones, which is a good thing to do if you have proper stateless workloads and pod disruption budgets in place. It helps keep your app highly available. But in this case, the job was a one-off thing and it had to run reliably. AZ rebalancing was not really required. AWS Auto Scaling groups allow you to suspend AZ rebalancing. After suspending the process, we didn't run into this issue anymore.

Do you think AWS randomly terminates your instances? It might not really be random after all. If you have any stories where AWS was randomly terminating your instances and you didn't know why, share them below.
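For reference, suspending AZ rebalancing on an Auto Scaling group is a one-liner with the AWS CLI; the group name below is a placeholder:

```shell
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-eks-node-group-asg \
  --scaling-processes AZRebalance
```

`resume-processes` with the same arguments turns it back on once the one-off workload is done.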
41
5 Comments -
Moustafa ElZeny
Linux Before DevOps

Ever wonder why some automations just “click” while others break the minute something’s off? Here’s a hard truth most won’t admit: You can’t “do DevOps” if you don’t speak Linux.

Pipelines? All those shiny tools you love:
-> Jenkins, GitLab CI, Terraform, Ansible run on Linux agents.
-> Ansible playbooks? Just SSH scripts talking Linux.
-> CI/CD? Still just clever orchestration of Linux commands.
-> Even your Docker containers? Doing Linux syscalls behind your back.

If “ps”, “chmod”, “grep”, and “ls” feel like hieroglyphics, you’re not building systems, just decorating them.

👉 DevOps isn’t a stack of tools. DevOps is a culture and it sits on a foundation called “Linux.”
🏆 The fastest-rising DevOps engineers? They know exactly how the OS thinks before touching a single playbook.

How to stand out?
30 minutes a day in the terminal: break, fix, repeat.
Start micro-projects: set up Nginx, automate backups, analyze logs.
Chase errors. Learn from the “why,” not just the “how.”
“DevOps without Linux is automation without foundation.”

---

Why Linux before DevOps?
Everyone chases the word DevOps as if it were a magic wand… Pipelines, Terraform, Jenkins, and Ansible. But the question is: if you don’t understand Linux… what exactly are you automating? In the end, the pipeline is an agent running on Linux. An Ansible playbook is nothing but commands being shuffled around and executed on a Linux server. Even the Docker build in the middle of the pipeline = just system calls to the kernel. CI/CD without knowing the OS = like driving a car without understanding what an engine is. DevOps isn’t a tool… DevOps = culture + foundation. And that foundation is called Linux. 🐧 Learn Linux first… after that, DevOps becomes a real weapon in your hands, not just a buzzword.

What's YOUR one must-know Linux command for infrastructure automation? Drop it below 👇 and if this hit home, repost & tag a friend who’s levelling up!
🚀 #RedHatAccelerators #Linux #DevOps #Automation #OpenSource #CI #CICD #SysAdmin #PlatformEngineering #Containers #Infrastructure #Cloud #EngineeringCulture #ContinuousLearning #TeamWork #Ubuntu #Fedora #RedHat #Docker #Ansible #Kubernetes
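In the spirit of the "30 minutes a day in the terminal" advice above, a tiny drill that exercises a few of the commands the post names, assuming nothing beyond a POSIX shell:

```shell
printf '#!/bin/sh\necho hello\n' > hello.sh   # write a two-line script
ls -l hello.sh        # no execute bit yet
chmod +x hello.sh     # grant execute permission
ls -l hello.sh        # x bits now visible
./hello.sh            # runs the script
```

Break it (remove the shebang, drop the execute bit), read the error, fix it: that is the "why, not just the how" loop in miniature.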
57
8 Comments -
Piyush sachdeva
Stop paying thousands of dollars for Kubernetes. ❌ While K8s is incredibly powerful, it often leads to significant cloud cost overruns due to inefficient resource allocation. My latest video provides a deep dive into how you can reclaim control over your budget using automated optimization with Cast AI. I will show you the hands-on demo to dramatically reduce your expenses. Key insights from the session: → Provision an EKS Cluster with CastAI installed → How to get an instant analysis of your cluster's cost-saving potential (more than 70%). → A live demonstration of automated cluster rebalancing for optimal node selection. → The strategy behind leveraging spot instances and right-sizing to cut compute costs without sacrificing stability. Ready to transform your cloud cost management? Watch the full demo here: https://lnkd.in/dvsETe7g What are your tips to cut down Kubernetes cost? Drop in the comments • • • 🔔 Follow me (Piyush sachdeva) for more real talk on DevOps and cloud! ♻️ Share this post to help someone else level up their learning. ✍ Sign up for my FREE AI newsletter to stay updated with AI tools and advancements (DailyAIScoops). #Kubernetes #FinOps #CloudCost #DevOps #CastAI
154
4 Comments