<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[DevOps Detours: Your Guide to Modern DevOps Practices]]></title><description><![CDATA[DevOps Detours: Insights, tutorials, and solutions on Kubernetes, CI/CD, cloud platforms, and automation to optimize workflows and boost engineering expertise.]]></description><link>https://devopsdetours.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1738697933631/1133e943-c733-4bf9-9df8-8c2ffd294019.png</url><title>DevOps Detours: Your Guide to Modern DevOps Practices</title><link>https://devopsdetours.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 11:55:47 GMT</lastBuildDate><atom:link href="https://devopsdetours.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Up and Running with kubectl-ai [PDF]]]></title><description><![CDATA[Download

💡
This document is optimized for mobile view.]]></description><link>https://devopsdetours.com/up-and-running-with-kubect-ai-pdf</link><guid isPermaLink="true">https://devopsdetours.com/up-and-running-with-kubect-ai-pdf</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[kubectl-ai]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Wed, 30 Jul 2025 17:56:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753898584930/01739f18-1331-4d48-b709-fc2750685607.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<iframe src="https://drive.google.com/file/d/1uccNY4XQ542FyjvMb2I_yHPvfgtMo7as/preview" width="640" height="480"></iframe>

<p><a target="_blank" href="https://drive.usercontent.google.com/download?id=1uccNY4XQ542FyjvMb2I_yHPvfgtMo7as&amp;export=download&amp;authuser=0">Download</a></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This document is optimized for mobile view.</div>
</div>]]></content:encoded></item><item><title><![CDATA[How to deal with DNS caches [PDF]]]></title><description><![CDATA[Learn to deal with DNS caches as efficiently as possible.


Download


💡
This document is optimized for mobile view.]]></description><link>https://devopsdetours.com/how-to-deal-with-dns-caches-pdf</link><guid isPermaLink="true">https://devopsdetours.com/how-to-deal-with-dns-caches-pdf</guid><category><![CDATA[dns]]></category><category><![CDATA[cache]]></category><category><![CDATA[Cache Invalidation]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Mon, 21 Jul 2025 12:33:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753101148740/9721c8d4-86d5-4ec8-b4f7-c317f7ccfe56.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Learn to deal with DNS caches as efficiently as possible.</p>
<iframe src="https://drive.google.com/file/d/1J45Qg2tfhsyp0dBscvqmayEDzq_IhXqw/preview" width="640" height="480"></iframe>

<p><a target="_blank" href="https://drive.usercontent.google.com/download?id=1J45Qg2tfhsyp0dBscvqmayEDzq_IhXqw&amp;export=download&amp;authuser=0">Download</a></p>
<blockquote>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This document is optimized for mobile view.</div>
</div></blockquote>
]]></content:encoded></item><item><title><![CDATA[Are Kubernetes Secrets Really Secure? [PDF]]]></title><description><![CDATA[Download

💡
This document is optimized for mobile view.]]></description><link>https://devopsdetours.com/are-kubernetes-secrets-really-secure</link><guid isPermaLink="true">https://devopsdetours.com/are-kubernetes-secrets-really-secure</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[secrets]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Sat, 12 Jul 2025 08:21:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752308392314/25995822-24ef-4fab-afa4-f88a806d9e89.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<iframe src="https://drive.google.com/file/d/1R-vnacUqpC-nnBcMIPA3QTuEEf7qXjVH/preview" width="640" height="480"></iframe>

<p><a target="_blank" href="https://drive.usercontent.google.com/download?id=1R-vnacUqpC-nnBcMIPA3QTuEEf7qXjVH&amp;export=download&amp;authuser=0">Download</a></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This document is optimized for mobile view.</div>
</div>]]></content:encoded></item><item><title><![CDATA[envsubst: The simplest bash variable substitution tool]]></title><description><![CDATA[🧐 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗲𝗻𝘃𝘀𝘂𝗯𝘀𝘁?envsubst (GNU gettext) streams stdin ➜ stdout, replacing each $VAR with its environment value. Think sed for env-vars, no regex.
🚀 𝗪𝗵𝗲𝗻 𝗱𝗼𝗲𝘀 𝗶𝘁 𝘀𝗵𝗶𝗻𝗲?• Container start-up configs: inject secrets or ...]]></description><link>https://devopsdetours.com/the-simplest-bash-variable-substitution-tool</link><guid isPermaLink="true">https://devopsdetours.com/the-simplest-bash-variable-substitution-tool</guid><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Wed, 18 Jun 2025 14:39:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750257567638/2be4aec7-4a80-420e-afb6-a5453e3896fb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🧐 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗲𝗻𝘃𝘀𝘂𝗯𝘀𝘁?<br />envsubst (GNU gettext) streams stdin ➜ stdout, replacing each $VAR with its environment value. Think <em>sed</em> for env-vars, no regex.</p>
<p>🚀 𝗪𝗵𝗲𝗻 𝗱𝗼𝗲𝘀 𝗶𝘁 𝘀𝗵𝗶𝗻𝗲?<br />• Container start-up configs: inject secrets or hostnames into nginx.conf<br />• Quick local scripting: stamp today’s date or $HOME into a file in 2 lines<br />• CI/CD pipelines: render YAML/INI/JSON per-environment without Helm/Jinja</p>
<p>👨‍💻 𝗖𝗼𝗱𝗲:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> IP=$(curl -s https://api.ipify.org)
envsubst <span class="hljs-string">'$IP'</span> &lt; hostname.tmpl &gt; hostname.conf
</code></pre>
<p>ℹ️ 𝗘𝘅𝗽𝗹𝗮𝗻𝗮𝘁𝗶𝗼𝗻:</p>
<p>1️⃣ curl fetches my public IP address and stores it in the IP environment variable.</p>
<p>2️⃣ envsubst reads the template file, replaces every instance of $IP with that value, and saves the finished file as hostname.conf. Passing '$IP' as the argument limits substitution to that one variable, so any other $-prefixed text in the template is left untouched.</p>
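<p>To make the “sed for env-vars” analogy concrete, here is a hand-rolled stand-in for <code>envsubst '$IP'</code>. It is a sketch only, handling a single variable and using a hypothetical example value in place of the curl lookup:</p>
<pre><code class="lang-bash"># Hand-rolled stand-in for: envsubst '$IP'  (one variable, no escaping edge cases)
IP="203.0.113.7"                              # hypothetical value; envsubst reads it from the environment
printf 'server_name $IP;\n' | sed "s|\$IP|$IP|g"
# prints: server_name 203.0.113.7;
</code></pre>
<p>Unlike the real envsubst, this breaks if the value happens to contain the sed delimiter, which is exactly why the gettext tool is the safer default.</p>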
<p>🎞️ Check the 14-second demo below to watch it happen in real time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750256904060/76deb445-bbd8-46b2-a57c-2a37e80b6617.gif" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[AWS NAT Cost Reduction Strategies]]></title><description><![CDATA[Navigating AWS NAT options isn’t just about plugging a connectivity hole – it’s about balancing cost, reliability, and complexity for your specific needs. Here’s a quick recap and comparison to help guide decision-making.

TL;DR

Already dual-stack /...]]></description><link>https://devopsdetours.com/aws-nat-cost-reduction-strategies</link><guid isPermaLink="true">https://devopsdetours.com/aws-nat-cost-reduction-strategies</guid><category><![CDATA[AWS]]></category><category><![CDATA[NAT Gateway]]></category><category><![CDATA[nat-instance]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Mon, 28 Apr 2025 15:11:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745853015392/4f3ee15c-59ed-4724-a59c-f5b0c159af2a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Navigating AWS NAT options isn’t just about plugging a connectivity hole – it’s about balancing <strong>cost, reliability, and complexity</strong> for your specific needs. Here’s a quick recap and comparison to help guide decision-making.</p>
<hr />
<h3 id="heading-tldr">TL;DR</h3>
<ul>
<li><p><strong>Already dual-stack / IPv6-only?</strong> → Use an <strong>Egress-Only Internet Gateway</strong> (no NAT fees).</p>
</li>
<li><p><strong>More than 70% of bytes go to AWS services?</strong> → Shift to <strong>Gateway/Interface Endpoints</strong> and bypass NAT entirely.</p>
</li>
<li><p><strong>Low-duty or bursty dev/test traffic?</strong> → Spin up a <strong>Disposable NAT Gateway</strong>.</p>
</li>
<li><p><strong>Okay with running and patching EC2?</strong> → Use a <strong>NAT instance</strong>, the <strong>fck-nat AMI</strong>, or Spot-based <strong>alterNAT</strong>.</p>
</li>
<li><p><strong>Need ≥99.9% HA with minimal ops?</strong> → Stick with a <strong>Managed NAT Gateway</strong>.</p>
</li>
<li><p>If none of the above fit, <strong>re-evaluate</strong> your requirements.</p>
</li>
</ul>
<hr />
<h3 id="heading-ifttnat-if-this-then-nat-decision-tree">IFTTNAT (IF-This-Then-NAT) Decision Tree</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745909836724/59d915e6-d910-4fe4-b50a-1adf3b53473c.gif" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-aws-managed-nat-gateway-baseline"><strong>AWS Managed NAT Gateway (Baseline)</strong></h2>
<p>Easiest to deploy (fully managed), highly available within an AZ, and scales to high throughput without user intervention​. However, it introduces significant cost at scale (data processing fees) and provides no fine-grained control (no security groups on it, idle costs accumulate)​. Best for teams that value <strong>simplicity and uptime</strong> over cost, or when traffic levels are low enough that fees aren’t a concern. Common assumption: “It’s AWS-managed so it must be the best choice” – which is true for hassle-free operation, but as we’ve seen, it can be overkill or too costly for some scenarios.</p>
<h3 id="heading-use-case-diagrams"><strong>Use Case Diagrams</strong></h3>
<p><strong>Public Connectivity:</strong></p>
<p>Access the internet from a private subnet</p>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/public-nat-gateway-diagram.png" alt="A VPC with public and private subnets, a NAT gateway, and an internet gateway." /></p>
<p><strong>Private Connectivity:</strong></p>
<p><strong>Access your network using allow-listed IP addresses</strong></p>
<p>The following diagram shows how instances can access on-premises resources through AWS VPN. Traffic from the instances is routed to a virtual private gateway, over the VPN connection, to the customer gateway, and then to the destination in the on-premises network. However, suppose that the destination allows traffic only from a specific IP address range, such as 100.64.1.0/28. This would prevent traffic from these instances from reaching the on-premises network.</p>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/allowed-range.png" alt="Access to an on-premises network using an AWS VPN connection." /></p>
<p>The following diagram shows the key components of the configuration for this scenario. The VPC has its original IP address range plus the allowed IP address range. The VPC has a subnet from the allowed IP address range with a private NAT gateway. Traffic from the instances that is destined for the on-premises network is sent to the NAT gateway before being routed to the VPN connection. The on-premises network receives the traffic from the instances with the source IP address of the NAT gateway, which is from the allowed IP address range.</p>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/private-nat-allowed-range.png" alt="VPC subnet traffic routed through private NAT gateway" /></p>
<p><strong>Enable communication between overlapping networks</strong></p>
<p>You can use a private NAT gateway to enable communication between networks even if they have overlapping CIDR ranges. For example, suppose that the instances in VPC A need to access the services provided by the instances in VPC B.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745846337545/0108b5b9-bfc5-41df-8e05-611af6fda1d4.png" alt class="image--center mx-auto" /></p>
<p>The following diagram shows the key components of the configuration for this scenario. First, your IP management team determines which address ranges can overlap (non-routable address ranges) and which can't (routable address ranges). The IP management team allocates address ranges from the pool of routable address ranges to projects on request.</p>
<p>Each VPC has its original IP address range, which is non-routable, plus the routable IP address range assigned to it by the IP management team. VPC A has a subnet from its routable range with a private NAT gateway. The private NAT gateway gets its IP address from its subnet. VPC B has a subnet from its routable range with an Application Load Balancer. The Application Load Balancer gets its IP addresses from its subnets.</p>
<p>Traffic from an instance in the non-routable subnet of VPC A that is destined for the instances in the non-routable subnet of VPC B is sent through the private NAT gateway and then routed to the transit gateway. The transit gateway sends the traffic to the Application Load Balancer, which routes the traffic to one of the target instances in the non-routable subnet of VPC B. The traffic from the transit gateway to the Application Load Balancer has the source IP address of the private NAT gateway. Therefore, response traffic from the load balancer uses the address of the private NAT gateway as its destination. The response traffic is sent to the transit gateway and then routed to the private NAT gateway, which translates the destination to the instance in the non-routable subnet of VPC A.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745596573507/a64b029d-528e-430c-998f-09ddf005ab64.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-pricing">Pricing</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Price per NAT gateway ($/hour)</td><td>Price per GB of data processed ($/GB)</td></tr>
</thead>
<tbody>
<tr>
<td>$0.052</td><td>$0.052</td></tr>
</tbody>
</table>
</div><h3 id="heading-links">Links</h3>
<p>NAT Gateway Scenarios:</p>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html">https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html</a></li>
</ul>
<p><strong>Create a NAT gateway:</strong></p>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-working-with.html#nat-gateway-creating">https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-working-with.html#nat-gateway-creating</a></li>
</ul>
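<p>To see how the per-GB fee dominates, here is a quick back-of-the-envelope estimate using the prices above (one gateway, 10 TB/month; standard data-transfer-out charges, which apply to any egress path, are excluded):</p>
<pre><code class="lang-bash"># Monthly cost of one NAT Gateway processing 10 TB (Frankfurt prices from the table above)
awk 'BEGIN {
  hours = 730; gb = 10240
  hourly     = hours * 0.052   # gateway-hours
  processing = gb * 0.052      # per-GB data processing
  printf "hourly: $%.2f  processing: $%.2f  total: $%.2f\n", hourly, processing, hourly + processing
}'
# prints: hourly: $37.96  processing: $532.48  total: $570.44
</code></pre>
<p>The hourly charge is small change next to the data-processing line, which is why the alternatives below focus on traffic volume.</p>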
<hr />
<h2 id="heading-nat-instance-diy"><strong>NAT Instance (DIY)</strong></h2>
<p>Essentially free of AWS surcharges – you pay for an EC2 and standard bandwidth, making it much cheaper for high outbound traffic​. You also get more control (security groups, custom AMI possibilities). The trade-off is you manage it: patching, scaling, and handling failover in case of issues​. Suitable for <strong>cost-sensitive</strong> environments and when you have the expertise to manage the instance. It’s a building block for other solutions like fck-nat and alterNAT. Be ready to address the single-point-of-failure problem, either by accepting short downtimes or implementing your own redundancy.</p>
<h3 id="heading-diagram">Diagram</h3>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/nat-instance_updated.png" alt="Diagram showing the setup of a NAT instance in a VPC" /></p>
<h3 id="heading-pricing-1">Pricing</h3>
<p>Example:</p>
<p>t2.micro instance + egress fee (assuming 1 GB transferred) + EIP, for one hour:</p>
<p>$0.0134 (instance-hour) + $0.09 (1 GB egress) + $0.005 (EIP-hour) = $0.1084</p>
<h3 id="heading-link">Link</h3>
<p>How to do it yourself:</p>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/work-with-nat-instances.html">https://docs.aws.amazon.com/vpc/latest/userguide/work-with-nat-instances.html</a></li>
</ul>
<hr />
<h2 id="heading-fck-nat-feasible-cost-konfigurable-nat-an-aws-nat-instance-ami"><strong>fck-nat (Feasible cost konfigurable NAT: An AWS NAT Instance AMI)</strong></h2>
<p>A convenient improvement on the NAT instance approach. You get up-to-date, optimized NAT appliances you can launch with minimal effort. This drastically lowers deployment complexity for NAT instances and ensures you’re not running outdated software. Cost benefits mirror the NAT instance (no per-GB fee, can use tiny cheap instances). The main caution is that it’s a community project; while popular, it’s not an AWS native service, so you assume responsibility for it (albeit a small one). Great for <strong>small-to-medium workloads in production</strong>, and for development setups where you want to eliminate that ~$30/mo per environment NAT Gateway cost. Think of fck-nat as “NAT Instance 2.0” – easier and safer than rolling your own from scratch.</p>
<h3 id="heading-pricing-2">Pricing</h3>
<p>Please refer to the docs for detailed pricing:</p>
<p><a target="_blank" href="https://github.com/AndrewGuenther/fck-nat/blob/main/docs/choosing_an_instance_size.md">https://github.com/AndrewGuenther/fck-nat/blob/main/docs/choosing_an_instance_size.md</a></p>
<h3 id="heading-links-1">Links</h3>
<ul>
<li><p>Official Site: <a target="_blank" href="https://fck-nat.dev/v1.3.0/">https://fck-nat.dev/v1.3.0/</a></p>
</li>
<li><p>Deploy with Terraform: <a target="_blank" href="https://fck-nat.dev/v1.3.0/deploying/#terraform">https://fck-nat.dev/v1.3.0/deploying/#terraform</a></p>
</li>
<li><p>Deploy with CloudFormation: <a target="_blank" href="https://fck-nat.dev/v1.3.0/deploying/#cloudformation">https://fck-nat.dev/v1.3.0/deploying/#cloudformation</a></p>
</li>
<li><p>Manual Deployment: <a target="_blank" href="https://fck-nat.dev/stable/deploying/#manual-web-console">https://fck-nat.dev/stable/deploying/#manual-web-console</a></p>
</li>
<li><p>GitHub Repo: <a target="_blank" href="https://github.com/AndrewGuenther/fck-nat">https://github.com/AndrewGuenther/fck-nat</a></p>
</li>
</ul>
<hr />
<h2 id="heading-alternat-ha-nat-instance-setup"><strong>alterNAT (HA NAT Instance Setup)</strong></h2>
<p>A robust solution aimed at larger-scale or mission-critical use of NAT instances. It tackles the reliability concerns by introducing managed failover to NAT Gateways​ and automates maintenance so your NAT instances don’t become pets you forget to update​. It’s ideal for <strong>high-volume traffic scenarios (tens of TB per month)</strong> where NAT Gateway fees are exorbitant, but you cannot tolerate prolonged outages. The complexity is higher – essentially you’re running a mini service within your infrastructure – but Terraform scripts make it deployable if you’re invested in that ecosystem. alterNAT invites teams to rethink the assumption that “only AWS can provide HA”; it shows you <em>can</em> design a highly available NAT solution yourself​. Use it when you have a clear cost imperative and enough scale to justify the complexity. If your NAT Gateway costs are trivial, stick with simpler options.</p>
<h3 id="heading-diagram-1">Diagram</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745595591796/c20d11d9-7df7-4627-b5a1-1f023e331445.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-pricing-3">Pricing</h3>
<p>Please refer to the project docs for pricing details:</p>
<p><a target="_blank" href="https://github.com/chime/terraform-aws-alternat/tree/main?tab=readme-ov-file#background">https://github.com/chime/terraform-aws-alternat/tree/main?tab=readme-ov-file#background</a></p>
<h3 id="heading-links-2">Links</h3>
<p>There are two ways to deploy alterNAT:</p>
<ul>
<li><p>By building a Docker image:</p>
<ul>
<li><a target="_blank" href="https://github.com/chime/terraform-aws-alternat?tab=readme-ov-file#building-and-pushing-the-container-image">https://github.com/chime/terraform-aws-alternat?tab=readme-ov-file#building-and-pushing-the-container-image</a></li>
</ul>
</li>
<li><p>Using Terraform:</p>
<ul>
<li><a target="_blank" href="https://github.com/chime/terraform-aws-alternat?tab=readme-ov-file#use-the-terraform-module">https://github.com/chime/terraform-aws-alternat?tab=readme-ov-file#use-the-terraform-module</a></li>
</ul>
</li>
</ul>
<p>GitHub Repo:</p>
<ul>
<li><a target="_blank" href="https://github.com/chime/terraform-aws-alternat">https://github.com/chime/terraform-aws-alternat</a></li>
</ul>
<hr />
<h2 id="heading-disposable-nat-gateway"><strong>Disposable NAT Gateway</strong></h2>
<p>A creative strategy by Shahin Hemmati for <strong>very intermittent needs</strong>. It’s the ultimate cost saver if you truly don’t need continuous internet connectivity – why pay for NAT 24/7 when you use it 2% of the time? By automating NAT Gateway creation on a schedule, you ensure you “only pay for NAT Gateway while it’s actually needed”​. The obvious limitation is that it doesn’t work for always-online services. It flips the usual assumption (“NAT must be always available”) and asks, what if it isn’t? Use this for controlled environments like nightly maintenance, offline batch processing, or ultra-secure setups where internet access is a gated event. It requires discipline and coordination, but when applicable, it can nearly eliminate NAT costs (and even provide security benefits by closing egress pathways most of the day)​.</p>
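<p>Mechanically, the schedule is just a pair of EventBridge rules invoking create/delete Lambdas. A minimal CloudFormation sketch of that wiring (resource names, function names, and cron times here are illustrative, not taken from the linked project):</p>
<pre><code class="lang-yaml"># Illustrative only: bring the NAT Gateway up at 02:00 UTC, tear it down at 03:00 UTC
NatUpRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: cron(0 2 * * ? *)
    Targets:
      - Arn: !GetAtt CreateNatGatewayFn.Arn   # hypothetical Lambda that creates the gateway and updates routes
        Id: nat-up
NatDownRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: cron(0 3 * * ? *)
    Targets:
      - Arn: !GetAtt DeleteNatGatewayFn.Arn   # hypothetical Lambda that removes the route and deletes the gateway
        Id: nat-down
</code></pre>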
<h3 id="heading-diagram-2">Diagram</h3>
<p><img src="https://raw.githubusercontent.com/shahinam2/AWS-DevOps-Projects/main/06_Disposable_NAT_Gateway/readme-files/Disposable-NAT-GW-Diagram.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-pricing-4">Pricing</h3>
<p><strong>Cost Analysis of the Disposable NAT Gateway Solution:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Service</strong></td><td><strong>Usage per cycle</strong></td><td><strong>Price</strong></td><td><strong>Cost per cycle</strong></td></tr>
</thead>
<tbody>
<tr>
<td>NAT Gateway</td><td>1 hour</td><td>$0.052/hour</td><td>$0.052</td></tr>
<tr>
<td>Lambda</td><td>2 invocations, 128MB, 2 min each</td><td>$0.20 per 1M requests</td><td>Within the always free plan</td></tr>
<tr>
<td>EventBridge</td><td>2 scheduled events</td><td>$1 per 1M invocations</td><td>Within the always free plan</td></tr>
<tr>
<td>EIP</td><td>1 hour</td><td>$0.005/hour</td><td>$0.005</td></tr>
<tr>
<td>Data Processing</td><td>1 GB</td><td>$0.052/GB</td><td>$0.052</td></tr>
<tr>
<td><strong>Total cost per cycle</strong></td><td></td><td></td><td><strong>$0.109</strong></td></tr>
</tbody>
</table>
</div><p><strong>Cost Analysis of the Disposable NAT Instance Solution:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Service</strong></td><td><strong>Usage per cycle</strong></td><td><strong>Price</strong></td><td><strong>Cost per cycle</strong></td></tr>
</thead>
<tbody>
<tr>
<td>EC2 instance</td><td>1 hour</td><td>$0.0134/hour</td><td>$0.0134</td></tr>
<tr>
<td>Lambda</td><td>2 invocations, 128MB, 2 min each</td><td>$0.20 per 1M requests</td><td>Within the always free plan</td></tr>
<tr>
<td>EventBridge</td><td>2 scheduled events</td><td>$1 per 1M invocations</td><td>Within the always free plan</td></tr>
<tr>
<td>EIP</td><td>1 hour</td><td>$0.005/hour</td><td>$0.005</td></tr>
<tr>
<td>Data Transfer</td><td>1 GB</td><td>$0.09/GB</td><td>$0.09</td></tr>
<tr>
<td><strong>Total cost per cycle</strong></td><td></td><td></td><td><strong>$0.1084</strong></td></tr>
</tbody>
</table>
</div><h3 id="heading-links-3">Links</h3>
<p>Deploy using CloudFormation:</p>
<ul>
<li><a target="_blank" href="https://github.com/shahinam2/AWS-DevOps-Projects/tree/main/06_Disposable_NAT_Gateway#how-to-deploy-the-solution">https://github.com/shahinam2/AWS-DevOps-Projects/tree/main/06_Disposable_NAT_Gateway#how-to-deploy-the-solution</a></li>
</ul>
<p>GitHub Repo:</p>
<ul>
<li><a target="_blank" href="https://github.com/shahinam2/AWS-DevOps-Projects/tree/main/06_Disposable_NAT_Gateway">https://github.com/shahinam2/AWS-DevOps-Projects/tree/main/06_Disposable_NAT_Gateway</a></li>
</ul>
<hr />
<h2 id="heading-egress-only-internet-gateway-ipv6"><strong>Egress-Only Internet Gateway (IPv6)</strong></h2>
<p>The forward-looking approach. By adopting IPv6, you can avoid NAT altogether for a lot of traffic​. The egress-only IGW is <strong>free, high-performing, and simple</strong> – if you can use it. This strategy is best when you’re modernizing your network and can ensure both your apps and the services they call support IPv6. It won’t completely replace NAT Gateway until the rest of the world is on IPv6 (and you might still need NAT64 for IPv6-only instances to reach IPv4-only endpoints), but it can dramatically cut down NAT usage. It challenges the common mindset “IPv6 is optional in AWS.” As IPv6 adoption increases, ignoring egress-only gateways could mean literally turning away free bandwidth. Forward-thinking teams in <strong>Web, IoT, or cloud-native domains</strong> should definitely leverage this: let IPv6 carry as much as possible, and your NAT Gateways (or NAT instances) will only handle legacy IPv4 traffic​.</p>
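<p>Wiring one up takes just two resources: the gateway itself and an IPv6 default route in the private subnet’s route table. A minimal CloudFormation sketch (the logical names <code>Vpc</code> and <code>PrivateRouteTable</code> are illustrative placeholders):</p>
<pre><code class="lang-yaml"># Illustrative: IPv6-only egress for a private subnet, no NAT involved
EgressOnlyIgw:
  Type: AWS::EC2::EgressOnlyInternetGateway
  Properties:
    VpcId: !Ref Vpc                          # hypothetical VPC resource
Ipv6DefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable     # hypothetical route table resource
    DestinationIpv6CidrBlock: "::/0"
    EgressOnlyInternetGatewayId: !Ref EgressOnlyIgw
</code></pre>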
<h3 id="heading-diagram-3">Diagram</h3>
<p><img src="https://docs.aws.amazon.com/images/vpc/latest/userguide/images/egress-only-igw.png" alt="Using an egress-only internet gateway" /></p>
<h3 id="heading-pricing-5">Pricing</h3>
<p>There is no charge for an egress-only internet gateway itself, but standard data transfer charges still apply to traffic leaving through it (roughly $0.09/GB, depending on the region).</p>
<h3 id="heading-link-1">Link</h3>
<p>How to add an egress-only internet access to a subnet:</p>
<ul>
<li><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway-working-with.html">https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway-working-with.html</a></li>
</ul>
<hr />
<h2 id="heading-comparison-table">Comparison Table</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Strategy</td><td>Cost Efficiency</td><td>Security</td><td>Scalability</td><td>Deployment Complexity</td><td>Best Use Case</td></tr>
</thead>
<tbody>
<tr>
<td>NAT Gateway (Baseline)</td><td>High cost, simple pricing; costly at scale due to per-GB fees</td><td>No security groups; NACLs and instance SGs apply</td><td>Auto scales to 100 Gbps; highly available within AZ</td><td>Very easy (AWS managed); zero config</td><td>Always-on services needing simplicity and uptime</td></tr>
<tr>
<td>Disposable NAT Gateway (On-Demand)</td><td>Extremely low; pay only during active schedule windows</td><td>Secure by limited access windows; reduces attack surface</td><td>Full NAT Gateway scale during active period; no scale outside window</td><td>Low; requires only schedules, public &amp; private subnet IDs, plus the route table name</td><td>Intermittent egress needs like patching or batch jobs</td></tr>
<tr>
<td>NAT Instance (DIY)</td><td>Low cost, no per-GB fee; only EC2 hourly and bandwidth</td><td>Can use SGs; responsible for patching and updates</td><td>Limited by EC2 instance size; manual failover needed</td><td>Moderate; requires EC2 setup, routing, maintenance</td><td>Cost-sensitive workloads with moderate traffic</td></tr>
<tr>
<td>fck-nat (Pre-built AMIs)</td><td>Very low cost, pre-configured; uses cheap ARM instances</td><td>Improved security with modern AMIs; minimal upkeep</td><td>Up to 5 Gbps on t4g.nano; scales with instance size</td><td>Low; pre-configured AMIs reduce setup effort</td><td>Cost-efficient dev/test or small prod environments</td></tr>
<tr>
<td>alterNAT (HA NAT Instance Setup)</td><td>Medium; savings at scale but pays for standby NAT Gateways</td><td>Automated instance replacement and failover; patching handled</td><td>Per-AZ scaling with EC2 sizing; failover to NAT Gateway</td><td>High; Terraform setup with ASG, Lambda, route monitoring</td><td>High-traffic production environments needing HA</td></tr>
<tr>
<td>Egress-Only Internet Gateway (IPv6)</td><td>Free (no processing fee); only standard bandwidth charges</td><td>Blocks inbound by design; secure IPv6 egress only</td><td>Highly scalable; horizontally redundant by AWS</td><td>Low; simple setup but requires IPv6 readiness across the stack</td><td>Long-term IPv6 strategy or hybrid egress with IPv6 support</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>In practice, many organizations will use a <strong>combination</strong> of these strategies. For example, you might use fck-nat in dev/test, alterNAT in a high-traffic production environment, and enable IPv6 + egress-only IGW to offload part of the traffic, all at once. The AWS ecosystem allows you to mix and match per VPC or per subnet. The key is to <strong>evaluate your requirements and usage patterns</strong>. If uptime and simplicity trump everything, NAT Gateway is a fine choice – just be aware of the costs (and perhaps monitor them to catch any surprises). If cost is a major concern, explore these alternatives: you might save a lot more than you expect. Just remember that with greater control comes a bit more responsibility. As one engineer noted in a discussion about cutting NAT costs, AWS will always charge you for something; it’s on you to architect smartly around those charges​. In the end, the optimal solution will depend on your team’s tolerance for complexity, your traffic profile, and how much you value a few hundred milliseconds of failover time or a few hundred dollars of savings.</p>
<p>By constructively questioning the default assumptions – “NAT Gateway is the only way” or “managing infrastructure is too risky” – DevOps teams can arrive at a solution that best fits their <strong>cost goals and reliability needs</strong>. Whether it’s a lean EC2 NAT instance or a full-fledged HA setup with alterNAT, AWS gives us the building blocks to refine our egress architecture. The <strong>bottom line</strong>: don’t pay $75k in data processing fees if a weekend of engineering can save 40% of that​, and don’t hesitate to use multiple strategies as stepping stones (for instance, using fck-nat now and planning for IPv6 long-term). NAT in AWS is no longer one-size-fits-all – and that’s a good thing for the cloud community’s pocketbook and architecture creativity.</p>
<p>Note: the costs above are based on AWS pricing for the Europe (Frankfurt) region as of April 2025.</p>
]]></content:encoded></item><item><title><![CDATA[AWS VPC Subnetting Guide: Step-by-Step Planning Made Easy with Visual Tools]]></title><description><![CDATA[Introduction
If you've ever tried creating a custom VPC in AWS, you’ve probably run into the question: "What CIDR block should I use?"
For many DevOps/Cloud engineers, subnetting feels like a leftover from the old networking world — full of slashes, b...]]></description><link>https://devopsdetours.com/aws-vpc-subnetting-guide-step-by-step-planning-made-easy-with-visual-tools</link><guid isPermaLink="true">https://devopsdetours.com/aws-vpc-subnetting-guide-step-by-step-planning-made-easy-with-visual-tools</guid><category><![CDATA[Cloud Networking]]></category><category><![CDATA[AWS]]></category><category><![CDATA[vpc]]></category><category><![CDATA[subnetting]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Mon, 24 Mar 2025 16:11:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742832378884/e5d0bb47-1bab-441f-a300-5f9564bacfd9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>If you've ever tried creating a custom VPC in AWS, you’ve probably run into the question:<br /><strong>"What CIDR block should I use?"</strong></p>
<p>For many DevOps/Cloud engineers, subnetting feels like a leftover from the old networking world — full of slashes, bit counts, and IP math. But in reality, subnetting is <strong>still fundamental</strong> when designing resilient, scalable infrastructure in the cloud.</p>
<p>Whether you're allocating public and private subnets, planning for multiple availability zones, or avoiding IP conflicts across environments, <strong>CIDR block planning matters</strong>.</p>
<p>Thankfully, there's a tool that takes the guesswork out of subnetting: the <a target="_blank" href="https://www.davidc.net/sites/default/subnets/subnets.html">Visual Subnet Calculator</a>. In this post, I’ll show you how to use it to design your VPC layout in minutes — with no subnetting headaches — and explain how it helps prevent common mistakes cloud engineers make when dealing with CIDRs.</p>
<p>Let’s dive in.</p>
<hr />
<h2 id="heading-why-subnetting-still-matters-in-the-cloud">Why Subnetting Still Matters in the Cloud</h2>
<p>With the rise of serverless, managed services, and container orchestration, you might think subnetting is an old-school concern — something from the days of rack-mounted switches and spanning tree nightmares.</p>
<p>But that’s a dangerous assumption.</p>
<p>In cloud environments like AWS, subnetting is still the backbone of <strong>network segmentation, routing, and security</strong>. Here's why it matters:</p>
<h3 id="heading-1-every-vpc-needs-a-cidr-block">1. Every VPC Needs a CIDR Block</h3>
<p>When you create a Virtual Private Cloud in AWS, you must assign it an IPv4 CIDR block (e.g., <code>10.0.0.0/16</code>). This CIDR defines your <strong>IP address space</strong>, and all your subnets must live within it.</p>
<p>Once set, this CIDR becomes the <strong>foundation for routing tables, NAT behavior, and internal DNS resolution</strong>. You can associate secondary CIDR blocks to a VPC later, but if you miscalculate the primary block upfront, you'll likely end up re-architecting.</p>
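<p>As a quick sanity check (illustrative, using Python's standard <code>ipaddress</code> module), you can verify that a candidate CIDR is private address space and within the prefix lengths AWS accepts for a VPC (/16 through /28):</p>

```python
import ipaddress

# Candidate VPC CIDR block
cidr = ipaddress.ip_network("10.0.0.0/16")

print(cidr.is_private)              # RFC 1918 private space
print(16 <= cidr.prefixlen <= 28)   # AWS allows VPC CIDRs from /16 to /28
print(cidr.num_addresses)           # total address count: 65536
```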
<h3 id="heading-2-subnets-define-network-zones">2. Subnets Define Network Zones</h3>
<p>Subnets are not just IP ranges — they’re <strong>logical zones</strong>. For example:</p>
<ul>
<li><p>Public subnets = internet-facing EC2 instances, Load Balancers</p>
</li>
<li><p>Private subnets = databases, internal services</p>
</li>
<li><p>Isolated subnets = services without outbound access</p>
</li>
</ul>
<p>Each of these gets its own CIDR range. That means you have to plan:</p>
<ul>
<li><p>How many IPs are needed per zone</p>
</li>
<li><p>How many AZs you'll spread across</p>
</li>
<li><p>Whether you’ll reserve space for future growth</p>
</li>
</ul>
<h3 id="heading-3-ip-conflicts-break-everything">3. IP Conflicts Break Everything</h3>
<p>If your subnets overlap — or if your on-prem network overlaps with your cloud CIDR — you’ll run into <strong>VPC peering failures, blackholed packets, or broken VPNs</strong>.</p>
<p>Careless subnetting is one of the top causes of invisible infrastructure issues in hybrid and multi-cloud environments.</p>
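<p>Overlaps are easy to detect programmatically before they bite. A minimal sketch with Python's <code>ipaddress</code> module (the example ranges are hypothetical):</p>

```python
import ipaddress

onprem = ipaddress.ip_network("10.0.0.0/16")    # existing on-prem range
vpc = ipaddress.ip_network("10.0.128.0/17")     # proposed VPC, carved from the same range by mistake
safe = ipaddress.ip_network("10.1.0.0/16")      # non-conflicting alternative

print(onprem.overlaps(vpc))   # True: peering/VPN to this VPC would break
print(onprem.overlaps(safe))  # False: safe to use
```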
<h3 id="heading-4-security-depends-on-good-boundaries">4. Security Depends on Good Boundaries</h3>
<p>Security groups and NACLs rely on subnet-level granularity. If your network isn’t segmented properly — say, everything is crammed into a <code>/16</code> — it becomes much harder to isolate workloads or enforce zero-trust policies.</p>
<p>Proper subnetting allows you to <strong>model the principle of least privilege</strong> at the network layer.</p>
<p><strong>In Short:</strong></p>
<p>Even in 2025, subnetting is more than a legacy skill — it’s an essential part of responsible cloud infrastructure design.</p>
<p>The problem is, most engineers either:</p>
<ul>
<li><p>Overestimate how much space they need (and waste IPs), or</p>
</li>
<li><p>Underestimate it and get boxed in</p>
</li>
</ul>
<p>This is where the Visual Subnet Calculator proves to be an invaluable tool. Before diving into how it works, though, we need to understand the fundamentals of designing a VPC and its subnets.</p>
<hr />
<h2 id="heading-how-to-design-your-vpc-and-subnets-step-by-step">How to Design Your VPC and Subnets (Step by Step)</h2>
<p>Before you start clicking around in a subnet calculator, you need to answer a few critical design questions. This ensures your VPC setup is scalable, conflict-free, and aligned with your infrastructure goals.</p>
<p>Follow this checklist step by step — by the end, you’ll know exactly what CIDR to assign to your VPC and how to split it into subnets.</p>
<h3 id="heading-step-1-will-this-vpc-connect-to-other-vpcs-or-networks">Step 1: Will This VPC Connect to Other VPCs or Networks?</h3>
<ul>
<li><p><strong>If Yes</strong>: Avoid using overlapping IP ranges.</p>
<ul>
<li><p>Prefer less common private ranges like <code>10.1.0.0/16</code> or <code>100.64.0.0/10</code> (carrier-grade NAT space), and avoid <code>172.31.0.0/16</code>, which AWS uses for default VPCs.</p>
</li>
<li><p>This avoids conflicts with on-prem networks, peered VPCs, or VPNs.</p>
</li>
</ul>
</li>
<li><p><strong>If No</strong>: You have more freedom. <code>10.0.0.0/16</code> is fine for isolated workloads.</p>
</li>
</ul>
<h3 id="heading-step-2-how-many-environments-do-you-need">Step 2: How Many Environments Do You Need?</h3>
<p>Typical examples:</p>
<ul>
<li><p>Dev, Staging, Prod</p>
</li>
<li><p>QA, Test, Demo</p>
</li>
</ul>
<blockquote>
<p>Multiply environments × zones to estimate subnet count.</p>
<p>Example:<br />If you want <strong>3 environments</strong>, each spread across <strong>3 AZs</strong>, you’ll need <strong>9 subnets</strong>.</p>
</blockquote>
<h3 id="heading-step-3-do-you-need-public-amp-private-zones">Step 3: Do You Need Public &amp; Private Zones?</h3>
<p>Many architectures use:</p>
<ul>
<li><p>Public subnets → for Load Balancers, NAT Gateways</p>
</li>
<li><p>Private subnets → for EC2, RDS, containers</p>
</li>
<li><p>Optionally isolated subnets → for sensitive services with no outbound access</p>
</li>
</ul>
<blockquote>
<p>Multiply again. If you want public + private, you now need:<br /><strong>9 × 2 = 18 subnets</strong> total.</p>
</blockquote>
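<p>The multiplication in the two callouts above is simple enough to sketch in a few lines of Python:</p>

```python
# Subnet count = environments x availability zones x subnet tiers
envs = ["dev", "staging", "prod"]
azs = ["a", "b", "c"]
tiers = ["public", "private"]

subnet_count = len(envs) * len(azs) * len(tiers)
print(subnet_count)  # 18
```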
<h3 id="heading-step-4-estimate-the-number-of-hosts-per-subnet">Step 4: Estimate the Number of Hosts per Subnet</h3>
<p>Use this table to decide your subnet mask (<code>/19</code>, <code>/20</code>, <code>/21</code>, etc.)</p>
<p>AWS VPC subnet size reference (based on a /16 CIDR block). Note: the “Usable Hosts” column uses the classic formula (total minus 2); AWS reserves 3 additional IPs per subnet (5 in total), so subtract 3 more for the AWS-usable count.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Subnet Mask</strong></td><td><strong># of Subnets in a /16</strong></td><td><strong>Total IPs</strong></td><td><strong>Usable Hosts</strong></td></tr>
</thead>
<tbody>
<tr>
<td><code>/16</code></td><td>1</td><td>65,536</td><td><strong>65,534</strong></td></tr>
<tr>
<td><code>/17</code></td><td>2</td><td>32,768</td><td><strong>32,766</strong></td></tr>
<tr>
<td><code>/18</code></td><td>4</td><td>16,384</td><td><strong>16,382</strong></td></tr>
<tr>
<td><code>/19</code></td><td>8</td><td>8,192</td><td><strong>8,190</strong></td></tr>
<tr>
<td><code>/20</code></td><td>16</td><td>4,096</td><td><strong>4,094</strong></td></tr>
<tr>
<td><code>/21</code></td><td>32</td><td>2,048</td><td><strong>2,046</strong></td></tr>
<tr>
<td><code>/22</code></td><td>64</td><td>1,024</td><td><strong>1,022</strong></td></tr>
<tr>
<td><code>/23</code></td><td>128</td><td>512</td><td><strong>510</strong></td></tr>
<tr>
<td><code>/24</code></td><td>256</td><td>256</td><td><strong>254</strong></td></tr>
<tr>
<td><code>/25</code></td><td>512</td><td>128</td><td><strong>126</strong></td></tr>
<tr>
<td><code>/26</code></td><td>1024</td><td>64</td><td><strong>62</strong></td></tr>
<tr>
<td><code>/27</code></td><td>2048</td><td>32</td><td><strong>30</strong></td></tr>
<tr>
<td><code>/28</code></td><td>4096</td><td>16</td><td><strong>14</strong></td></tr>
</tbody>
</table>
</div><p><strong>Quick Guidance:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Use Case</strong></td><td><strong>Recommended Mask</strong></td></tr>
</thead>
<tbody>
<tr>
<td>NAT Gateway, Bastion Host</td><td><code>/28</code> or <code>/27</code></td></tr>
<tr>
<td>Public Web App Tier</td><td><code>/24</code> or <code>/23</code></td></tr>
<tr>
<td>Container Clusters / Auto Scaling</td><td><code>/21</code> or <code>/20</code></td></tr>
<tr>
<td>Data Layer (RDS, ElastiCache)</td><td><code>/24</code></td></tr>
<tr>
<td>High-throughput services (e.g., EKS, Kafka)</td><td><code>/19</code> or larger</td></tr>
</tbody>
</table>
</div><blockquote>
<p>Example:<br />If each subnet needs up to 500 containers or EC2 instances, a <code>/23</code> (510 classic usable) barely fits; go with <code>/22</code> or <code>/21</code> to leave headroom.</p>
</blockquote>
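<p>The subnet-size table above follows directly from powers of two, so a short Python loop reproduces it, including the AWS-adjusted usable count (AWS reserves 5 IPs per subnet rather than the classic 2):</p>

```python
# Reproduce the /16-based subnet size table
for prefix in range(16, 29):
    subnets_in_16 = 2 ** (prefix - 16)   # how many such subnets fit in a /16
    total_ips = 2 ** (32 - prefix)       # total addresses per subnet
    print(f"/{prefix}: {subnets_in_16} subnets, {total_ips} IPs, "
          f"{total_ips - 2} classic usable, {total_ips - 5} usable in AWS")
```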
<h3 id="heading-step-5-choose-a-vpc-cidr-that-can-fit-all-your-subnets">Step 5: Choose a VPC CIDR that Can Fit All Your Subnets</h3>
<p>Your VPC CIDR needs to be big enough to contain all planned subnets <strong>with some room to grow</strong>.</p>
<blockquote>
<p>Example: You want 20 subnets with <code>/21</code> ranges.<br />Each <code>/21</code> = 2,048 IPs<br />→ 20 × 2,048 = 40,960 IPs<br />Choose at least a <code>/16</code> VPC (which provides 65,536 IPs).</p>
</blockquote>
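<p>The arithmetic in this example can be checked in Python; <code>math.log2</code> finds the smallest prefix whose block covers the required address count:</p>

```python
import math

subnets_needed = 20
ips_per_subnet = 2048                               # each /21 block
total_ips = subnets_needed * ips_per_subnet         # 40,960 addresses required

# Smallest CIDR block that holds total_ips addresses
vpc_prefix = 32 - math.ceil(math.log2(total_ips))
print(total_ips, f"/{vpc_prefix}")  # 40960 /16
```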
<h3 id="heading-step-6-document-your-subnet-layout">Step 6: Document Your Subnet Layout</h3>
<p>Before creating anything in AWS, make a quick plan:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Subnet Name</td><td>CIDR Range</td><td>AZ</td><td>Type</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><code>dev-public-a</code></td><td>10.0.0.0/21</td><td><code>us-east-1a</code></td><td>Public</td><td>Dev LB / NAT</td></tr>
<tr>
<td><code>dev-private-a</code></td><td>10.0.8.0/21</td><td><code>us-east-1a</code></td><td>Private</td><td>Dev app layer</td></tr>
<tr>
<td><code>prod-public-a</code></td><td>10.0.16.0/21</td><td><code>us-east-1a</code></td><td>Public</td><td>Prod LB / NAT</td></tr>
<tr>
<td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr>
</tbody>
</table>
</div><p>Now you’re ready to plug these ranges into AWS or a subnet calculator.</p>
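<p>A layout like the table above can also be generated rather than hand-typed; as a sketch, Python's <code>ipaddress</code> module carves the VPC into ordered <code>/21</code> blocks (the subnet names here are just the ones from the table):</p>

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
blocks = vpc.subnets(new_prefix=21)  # generator of /21 blocks, in address order

# Assign each planned subnet the next free /21
plan = {name: str(next(blocks))
        for name in ["dev-public-a", "dev-private-a", "prod-public-a"]}
print(plan)
# {'dev-public-a': '10.0.0.0/21', 'dev-private-a': '10.0.8.0/21', 'prod-public-a': '10.0.16.0/21'}
```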
<h3 id="heading-steps-summary">Steps summary</h3>
<p>To make an informed decision about your VPC’s CIDR and subnet layout, follow these six steps:</p>
<ol>
<li><p><strong>Check for External Connectivity</strong><br /> Will your VPC connect to on-prem, other VPCs, or VPNs?<br /> → Avoid overlapping IP ranges.</p>
</li>
<li><p><strong>Define Your Environments</strong><br /> How many environments (e.g., dev, staging, prod) do you need?</p>
</li>
<li><p><strong>Decide on Subnet Types</strong><br /> Will you have public, private, or isolated subnets for each environment?</p>
</li>
<li><p><strong>Estimate Hosts per Subnet</strong><br /> Pick subnet sizes based on expected host counts. Use the <code>/16</code>–<code>/28</code> table for guidance.</p>
</li>
<li><p><strong>Calculate Total IP Need</strong><br /> Multiply the number of subnets × IPs per subnet → choose a large enough VPC CIDR.</p>
</li>
<li><p><strong>Document Your Plan</strong><br /> Create a subnet table before provisioning in AWS.</p>
</li>
</ol>
<h3 id="heading-example-plan-for-a-multi-az-multi-env-app">Example: Plan for a Multi-AZ, Multi-Env App</h3>
<p><strong>Goals:</strong></p>
<ul>
<li><p>Environments: <strong>Dev</strong>, <strong>Staging</strong>, <strong>Prod</strong></p>
</li>
<li><p>Availability Zones: <strong>3 AZs</strong></p>
</li>
<li><p>Subnet Types: <strong>Public</strong> and <strong>Private</strong></p>
</li>
<li><p>Each subnet should support ~500 IPs</p>
</li>
</ul>
<p><strong>Step-by-step Breakdown:</strong></p>
<ol>
<li><p><strong>Connectivity</strong>: Will connect to on-prem → use <code>10.1.0.0/16</code> to avoid overlap.</p>
</li>
<li><p><strong>Environments</strong>: 3 total (Dev, Staging, Prod)</p>
</li>
<li><p><strong>Subnets per AZ</strong>:</p>
<ul>
<li><p>Each env needs <strong>3 public + 3 private</strong> = <strong>6 subnets</strong></p>
</li>
<li><p>Total across 3 envs: <strong>18 subnets</strong></p>
</li>
</ul>
</li>
<li><p><strong>Subnet size</strong>:</p>
<ul>
<li>Use <code>/22</code> → gives 1,022 usable IPs (more than enough)</li>
</ul>
</li>
<li><p><strong>IP count</strong>:</p>
<ul>
<li>18 subnets × 1,022 = <strong>18,396 IPs</strong> → a <code>/16</code> (65,534 usable) is sufficient, with plenty of room to grow</li>
</ul>
</li>
<li><p><strong>Document</strong>:</p>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Subnet Name</td><td>CIDR Range</td><td>AZ</td><td>Type</td></tr>
</thead>
<tbody>
<tr>
<td>dev-public-a</td><td>10.1.0.0/22</td><td>us-east-1a</td><td>Public</td></tr>
<tr>
<td>dev-private-a</td><td>10.1.4.0/22</td><td>us-east-1a</td><td>Private</td></tr>
<tr>
<td>staging-public-b</td><td>10.1.8.0/22</td><td>us-east-1b</td><td>Public</td></tr>
<tr>
<td>prod-private-c</td><td>10.1.12.0/22</td><td>us-east-1c</td><td>Private</td></tr>
<tr>
<td>...</td><td>...</td><td>...</td><td>...</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-meet-the-visual-subnet-calculator">Meet the Visual Subnet Calculator</h2>
<p>Once you’ve chosen your VPC CIDR block — let’s say <code>10.0.0.0/16</code> — your next step is to <strong>divide it into subnets</strong> that match your infrastructure needs. This is where most people either:</p>
<ul>
<li><p>Overestimate and waste thousands of IPs per subnet, or</p>
</li>
<li><p>Underestimate and run out of space when scaling up.</p>
</li>
</ul>
<p>That’s where the <a target="_blank" href="https://www.davidc.net/sites/default/subnets/subnets.html">Visual Subnet Calculator</a> comes in — a brilliant, no-nonsense tool that mimics exactly how you plan subnets inside the AWS VPC console.</p>
<h3 id="heading-step-by-step-example">Step-by-step Example</h3>
<p>Imagine you’ve created a VPC with the range <code>10.0.0.0/16</code>. Now you want to split it into smaller blocks — for example, <strong>eight subnets with around 8,000 usable IPs each</strong>.</p>
<p>Try This:</p>
<ol>
<li>Enter <code>10.0.0.0</code> and <code>/16</code> into the Visual Subnet Calculator.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742819458865/074b3228-eb4d-4805-83cb-752decf79757.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><p>Click <strong>Update</strong> — this shows the full <code>/16</code> block.</p>
</li>
<li><p>Click <strong>Divide</strong> on the <code>/16</code> — you’ll get 2 × <code>/17</code>.</p>
</li>
<li><p>Click <strong>Divide</strong> again — each <code>/17</code> becomes 2 × <code>/18</code>.</p>
</li>
<li><p>Keep clicking <strong>Divide</strong> until you reach <code>/19</code>.</p>
</li>
</ol>
<p>At the <code>/19</code> level, the calculator shows you <strong>8 subnets</strong>, each with:</p>
<ul>
<li><p><strong>8190 usable hosts</strong></p>
</li>
<li><p>Ranges like:</p>
<ul>
<li><p><code>10.0.0.0 - 10.0.31.255</code></p>
</li>
<li><p><code>10.0.32.0 - 10.0.63.255</code></p>
</li>
<li><p>...</p>
</li>
<li><p><code>10.0.224.0 - 10.0.255.255</code></p>
</li>
</ul>
</li>
</ul>
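<p>If you'd rather verify this division than click through it, the same split is a couple of lines of Python:</p>

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
nineteens = list(vpc.subnets(new_prefix=19))  # split the /16 into /19 blocks

print(len(nineteens))  # 8
first, last = nineteens[0], nineteens[-1]
print(first.network_address, "-", first.broadcast_address)  # 10.0.0.0 - 10.0.31.255
print(last.network_address, "-", last.broadcast_address)    # 10.0.224.0 - 10.0.255.255
print(first.num_addresses - 2)                              # 8190 usable hosts (classic count)
```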
<p>This is how it’s done in action:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742830023113/402bf8d7-8415-4ea3-b6ad-8aab180ede0c.gif" alt class="image--center mx-auto" /></p>
<p>The final result:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742819911832/765ae174-510b-4f94-aec9-4a6bc5e1d8e7.png" alt class="image--center mx-auto" /></p>
<p>You can also undo your divisions!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742830206794/7c86b9b9-b746-4e44-92f8-75c6379c6518.gif" alt class="image--center mx-auto" /></p>
<p><strong>Why It’s So Useful</strong></p>
<ul>
<li><p>Instantly shows <strong>usable IPs</strong> per subnet</p>
</li>
<li><p>Clearly displays <strong>start/end IPs</strong></p>
</li>
<li><p>Lets you <strong>experiment interactively</strong> with subnet sizes</p>
</li>
<li><p>Supports <strong>joining subnets</strong> back together if you go too far</p>
</li>
</ul>
<p>In short, it’s exactly what most engineers need before going into the AWS Console or writing Terraform code.</p>
<h3 id="heading-how-can-i-predict-this-in-advance">How Can I Predict This in Advance?</h3>
<p>You might be wondering:</p>
<blockquote>
<p>“How do I know how many <code>/19</code>s fit inside a <code>/16</code> <em>before</em> I use a visualizer?”</p>
</blockquote>
<p>For that, there’s a helpful tool: the <a target="_blank" href="https://www.iptp.net/en_US/iptp-tools/ip-calculator/">IPTP IP Subnet Calculator</a></p>
<p>This tool lets you:</p>
<ul>
<li><p>Input a CIDR (like <code>10.0.0.0/16</code>)</p>
</li>
<li><p>Choose the desired <strong>subnet mask</strong> (like <code>/19</code>)</p>
</li>
<li><p>See the <strong>exact number of possible subnets</strong> and get <strong>full details</strong> like usable IPs, broadcast addresses, wildcard masks, and binary subnet masks.</p>
</li>
</ul>
<p>Example:</p>
<ul>
<li><p>Input: <code>10.0.0.0/16</code></p>
</li>
<li><p>Desired subnet: <code>/19</code></p>
</li>
<li><p>Result: <strong>8 subnets</strong> — from <code>10.0.0.0/19</code> to <code>10.0.224.0/19</code></p>
</li>
<li><p>Each has <strong>8190 usable IPs in theory</strong>, but remember:</p>
<blockquote>
<p><strong>In AWS VPCs, only 8,187 are actually usable</strong> per subnet due to AWS reserving 5 IPs.</p>
</blockquote>
</li>
</ul>
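<p>The five AWS-reserved addresses can be enumerated explicitly, which makes the 8,187 figure easy to see:</p>

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/19")

# AWS reserves 5 addresses per subnet: the network address, +1 (VPC router),
# +2 (DNS), +3 (reserved for future use), and the broadcast address.
reserved = [
    subnet.network_address,      # 10.0.0.0
    subnet.network_address + 1,  # 10.0.0.1
    subnet.network_address + 2,  # 10.0.0.2
    subnet.network_address + 3,  # 10.0.0.3
    subnet.broadcast_address,    # 10.0.31.255
]
print(subnet.num_addresses - len(reserved))  # 8187 usable in AWS
```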
<p><em>(See IPTP calculator screenshots below for full breakdown)</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742906782683/f601255c-7812-4436-91a6-4c65bdfde079.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742906805863/e6fffcdb-31ba-4852-be3b-9efb25a6275e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742820691385/d494d6af-9149-4570-8595-e277c89874ca.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742820712124/8675f553-363d-4c21-b3e9-7819e1c8d7f1.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-bonus-avoiding-common-subnetting-mistakes">Bonus: Avoiding Common Subnetting Mistakes</h2>
<p>Using a visual subnet calculator doesn’t just make planning easier — it helps you <strong>avoid critical mistakes</strong> that are surprisingly common, especially when working at scale in cloud environments.</p>
<p>Here’s how tools like <a target="_blank" href="https://www.davidc.net/sites/default/subnets/subnets.html">David C’s Visual Subnet Calculator</a> save you from disaster:</p>
<h3 id="heading-1-overlapping-subnets">1. Overlapping Subnets</h3>
<p>Manually assigning CIDR blocks increases the risk of defining two subnets with overlapping ranges — something AWS won’t allow, but Terraform and CloudFormation users can trip over easily.</p>
<p>The calculator shows you <strong>visually distinct ranges</strong> as you divide subnets, so you can ensure clean, non-overlapping allocations across all Availability Zones and environments.</p>
<h3 id="heading-2-ip-exhaustion">2. IP Exhaustion</h3>
<p>You might think a <code>/24</code> is enough — until autoscaling kicks in or you add new services.</p>
<p>The tool shows <strong>exact usable IPs per subnet</strong>, so you can confidently choose between a <code>/23</code>, <code>/22</code>, <code>/21</code>, etc., without guesswork — and leave room for future growth.</p>
<h3 id="heading-3-poor-network-segmentation">3. Poor Network Segmentation</h3>
<p>Too often, people just create a <code>/16</code> VPC and drop everything into a few massive subnets. That kills visibility, auditability, and fine-grained control.</p>
<p>With this tool, you can <strong>design subnet boundaries intentionally</strong> — separating public-facing components, backend services, database layers, and isolated internal apps using CIDR boundaries that match your architecture.</p>
<h3 id="heading-4-no-room-for-expansion">4. No Room for Expansion</h3>
<p>Let’s say you deploy in two AZs today — what happens when your app needs a third?</p>
<p>The visual layout helps you <strong>reserve IP space for future AZs or new environments</strong>, even if you're not using them yet. This makes your VPC more flexible and future-proof.</p>
<hr />
<h2 id="heading-want-to-use-visual-subnet-calculator-offline">Want to Use Visual Subnet Calculator Offline?</h2>
<p><a target="_blank" href="https://drive.google.com/file/d/1HO1S8LSXNg1LM2boKPv200u4cyiWqVpt/view?usp=sharing"><strong>Download it here</strong></a></p>
<p>Or you can even run it with Docker:</p>
<pre><code class="lang-bash">git clone https://github.com/davidc/subnets.git
cd subnets
docker build . -t subnets
docker run -d -p 5001:80 --name subnets subnets
</code></pre>
<p>Then open <code>http://localhost:5001</code> and start visualizing your subnets without relying on an external site.</p>
<p><a target="_blank" href="https://github.com/shahinam2/visual-subnet-calculator">Visual subnet calculator github repo</a></p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Subnetting doesn’t have to be painful. With the right tools and a bit of planning, you can design scalable, conflict-free VPCs that support any architecture, from simple web apps to multi-environment, multi-AZ cloud platforms.</p>
<p>The <a target="_blank" href="https://www.davidc.net/sites/default/subnets/subnets.html">Visual Subnet Calculator</a> is a must-have for DevOps engineers, SREs, and cloud architects. It bridges the gap between traditional networking and modern cloud practices, offering:</p>
<ul>
<li><p>Clear visual subnet breakdowns</p>
</li>
<li><p>Instant feedback on usable IPs</p>
</li>
<li><p>No guesswork when planning CIDR ranges</p>
</li>
</ul>
<p><strong>Bookmark it</strong>, share it with your team, and make it part of your VPC design workflow.</p>
]]></content:encoded></item><item><title><![CDATA[How to Efficiently Handle Several AWS Accounts: 6 Proven Methods]]></title><description><![CDATA[Introduction
As cloud practitioners, we follow best practices and use multi-account environments. This frequently led to situations where we were cross-referencing resources or viewing logs across multiple accounts. When using the AWS console this be...]]></description><link>https://devopsdetours.com/how-to-efficiently-handle-several-aws-accounts-6-proven-methods</link><guid isPermaLink="true">https://devopsdetours.com/how-to-efficiently-handle-several-aws-accounts-6-proven-methods</guid><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Wed, 12 Feb 2025 20:56:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739137627783/a24a01a5-ea8b-44fb-9b0a-8152ecafdccf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>As cloud practitioners, we follow best practices and use <a target="_blank" href="https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html">multi-account environments</a>. This frequently leads to situations where we are cross-referencing resources or viewing logs across multiple accounts. When using the AWS console, this becomes quite painful, as only one account and region is accessible at a time per browser.</p>
<p>Yes, one way to solve this is to simply stop using the console and develop your own abstractions and visualization layer on top of AWS’s APIs. However, the native console can be a useful tool for viewing your cloud resources, as it provides a user-friendly interface with real-time data, built-in visualizations, and quick access to service-specific dashboards. The AWS console enables intuitive navigation, reducing the need for CLI commands or API calls for simple tasks. It also offers service-specific features like CloudWatch dashboards, cost breakdowns in AWS Billing, and graphical representations of networking components such as VPCs.</p>
<h2 id="heading-tldr">TL;DR</h2>
<p>Managing multiple AWS accounts efficiently requires selecting the right tool based on workflow preferences and security requirements. Here is an overview of each tool in this post:</p>
<ul>
<li><p><strong>CloudGlance</strong> is ideal for those who need a <strong>GUI-based</strong> solution to <strong>manage multiple AWS accounts</strong>, <strong>organize credentials</strong>, and <strong>simplify SSH and port forwarding</strong>. It supports <strong>Firefox Containers</strong> for multi-session access and provides <strong>Git integration</strong> for team collaboration. This makes it particularly useful for <strong>DevOps engineers</strong> and <strong>teams managing complex AWS environments</strong>.</p>
</li>
<li><p><strong>Granted</strong> is a <strong>CLI-first tool</strong> that <strong>optimizes role-switching</strong> while keeping <strong>credentials encrypted</strong>. It is designed for users who prefer a <strong>fast, terminal-based approach</strong> to <strong>access multiple AWS accounts securely</strong>. While it lacks a GUI and SSH support, it excels in <strong>secure authentication and speed</strong>.</p>
</li>
<li><p><strong>Firefox Extensions</strong> like <strong>Multi-Account Containers</strong> offer a <strong>lightweight and browser-based</strong> way to manage <strong>AWS sessions</strong>. These extensions help with <strong>isolating AWS accounts in different containers</strong>, making them a good <strong>quick solution</strong> for those who mainly work within the <strong>AWS Console</strong>. However, they lack advanced features like credential management/encryption, SSH, or team collaboration.</p>
</li>
<li><p><strong>AWS Extend Switch Roles</strong> is a <strong>browser extension for Chrome and Firefox</strong> that enhances AWS’s native role-switching experience. It provides a <strong>quick and convenient way</strong> to <strong>switch between IAM roles</strong> inside the AWS Console without needing to re-enter credentials. However, it does not offer <strong>multi-session support</strong> like CloudGlance or AWS's built-in solution, nor does it handle SSH or team collaboration.</p>
</li>
<li><p><strong>AWS Built-in Multi-Account Manager</strong>, introduced in <strong>January 2025</strong>, provides a <strong>seamless multi-session experience directly within the AWS Console</strong>. It allows up to <strong>five concurrent sessions</strong> across different AWS accounts, eliminating the need for third-party tools for <strong>basic account switching</strong>. However, it does <strong>not support SSH, port forwarding, or external credential management</strong>.</p>
</li>
</ul>
<hr />
<h2 id="heading-cloud-glance-gui-based-aws-account-manager"><strong>Cloud Glance — GUI-Based AWS Account Manager</strong></h2>
<h3 id="heading-intro">Intro</h3>
<p>Managing multiple AWS accounts across different clients can be a daunting task, especially when switching between environments, accessing private networks, and troubleshooting infrastructure. <strong>CloudGlance</strong> is designed to simplify this process, providing a unified interface to streamline AWS account management and development workflows.</p>
<p>With <strong>CloudGlance</strong>, you can:</p>
<ul>
<li><p><strong>Group and visualize</strong> all your AWS accounts in a single, intuitive dashboard.</p>
</li>
<li><p><strong>Easily access AWS Console</strong> links, making daily monitoring and troubleshooting more efficient.</p>
</li>
<li><p><strong>Manage SSH connections</strong> by resolving port forwarding conflicts across multiple projects and clients.</p>
</li>
</ul>
<p>For developers and DevOps engineers juggling several AWS accounts, CloudGlance eliminates the hassle of navigating through different environments. It provides quick access to CloudWatch Dashboards, service logs, and other AWS resources—all in one place.</p>
<p>Additionally, when working with SSH, managing VPNs or bastion hosts across multiple AWS environments can lead to port forwarding conflicts. CloudGlance offers visibility into local port usage and their respective forwarding destinations, preventing clashes and simplifying connectivity.</p>
<p><strong>Currently free</strong>, CloudGlance may introduce premium options in the future. Stay tuned!</p>
<h3 id="heading-featureshttpsdocscloudglancedevguideintroductionwhat-ishtmlfeatures"><a target="_blank" href="https://docs.cloudglance.dev/guide/introduction/what-is.html#features">Features</a></h3>
<ul>
<li><p>CloudGlance manages your <code>~/.aws/credentials</code> so that you don't have to edit files manually. It's basically a <strong>GUI for your</strong> <code>~/.aws</code> files.</p>
</li>
<li><p>Open multiple AWS consoles at the same time with <strong>Firefox Containers</strong>.</p>
</li>
<li><p>CloudGlance creates temporary credentials with STS from your <strong>IAM Role, IAM User, IAM Federated login, or AWS SSO</strong>, as stored in your <code>~/.aws/credentials</code>. <strong>MFA</strong> is supported.</p>
</li>
<li><p><strong>Export</strong> temporary STS credentials <strong>to your terminal or to another AWS profile.</strong></p>
</li>
<li><p><strong>Single-click navigation into your most loved AWS service console pages</strong> with the help of bookmarks.</p>
</li>
<li><p><strong>Visualize &amp; manage connections</strong> between bastions and available local ports used <strong>for port forwarding.</strong></p>
</li>
<li><p>Both classic <strong>SSH (.pem) and AWS SSM</strong> are supported for <strong>port forwarding</strong>.</p>
</li>
<li><p>Built-in <strong>Git support</strong> to manage CloudGlance profiles across your teams. Push, pull, and merge JSON configuration profiles at the click of a button.</p>
</li>
<li><p>The option to <strong>encrypt sensitive information</strong> in your <code>~/.aws/credentials</code>, like the <code>aws_access_key_id</code> and the <code>aws_secret_access_key</code>, without breaking normal AWS CLI commands.</p>
</li>
<li><p>The <strong>Cloud Glance CLI</strong> can communicate with the Cloud Glance GUI to obtain temporary credentials and prompt for any user input required, like MFA and SSO.</p>
</li>
</ul>
<h3 id="heading-installation">Installation</h3>
<p>Before installing <strong>CloudGlance</strong>, ensure that you have <strong>AWS CLI Version 2</strong> installed and properly configured on your system.</p>
<p><strong>Download AWS CLI Version 2:</strong></p>
<p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html</a></p>
<p>Configuring AWS Accounts:</p>
<p>To improve account recognition within CloudGlance, you can assign meaningful names to your AWS accounts in your <code>.aws</code> configuration files. This makes it easier to distinguish between different environments.</p>
<p>Here’s an example of how to define multiple AWS accounts with custom names in your <code>~/.aws/config</code> file:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369135516/8a561f71-4a0c-4107-a8d8-74b2fb1416e3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369259327/c689330c-444e-49ff-9d3d-b47a5a47a86c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369301768/5adfe8ab-20b6-4b2c-9c33-62c224b61656.png" alt class="image--center mx-auto" /></p>
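<p>In text form, a minimal <code>~/.aws/config</code> along the lines of the screenshots might look like this (the profile names, region, account ID, and role name here are illustrative, not taken from the article):</p>

```ini
# ~/.aws/config (values are illustrative)
[profile client-a-prod]
region = eu-central-1
output = json

[profile client-a-dev]
region = eu-central-1
role_arn = arn:aws:iam::111111111111:role/Developer
source_profile = client-a-prod
```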
<p><strong>Install Firefox and Required Extensions</strong></p>
<p>To ensure <strong>CloudGlance</strong> works seamlessly, install <strong>Mozilla Firefox</strong> along with the necessary extensions:</p>
<ul>
<li><p><strong>Firefox Multi-Account Containers</strong> - Helps separate AWS accounts into different containers for better organization.</p>
<ul>
<li><a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/">https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/</a></li>
</ul>
</li>
<li><p><strong>Open External Links in a Container</strong> – Ensures AWS console links open in the correct container.</p>
<ul>
<li><a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/open-url-in-container/">https://addons.mozilla.org/en-US/firefox/addon/open-url-in-container/</a></li>
</ul>
</li>
</ul>
<p><strong>Download CloudGlance</strong></p>
<p><strong>CloudGlance</strong> is available for <strong>Windows, macOS, and Linux</strong>.</p>
<ul>
<li><a target="_blank" href="https://cloudglance.dev/download">https://cloudglance.dev/download</a></li>
</ul>
<p><strong>Running CloudGlance on Linux</strong></p>
<p>If you are using <strong>Linux</strong>, you may need to run the following commands to prevent errors:</p>
<pre><code class="lang-bash">chmod +x CloudGlance-0.0.131.AppImage
./CloudGlance-0.0.131.AppImage --no-sandbox
</code></pre>
<h3 id="heading-configuration">Configuration</h3>
<p>In this demo, we will:</p>
<ul>
<li><p>Create a <strong>profile</strong> in CloudGlance</p>
</li>
<li><p>Add <strong>two AWS IAM accounts</strong> to simplify access to their AWS Consoles with a single click</p>
</li>
</ul>
<p>First, create a group:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369545703/22c3da8a-3473-4e59-b479-89715e6a4d9c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369587117/0ed3caa3-bfae-4802-9063-da34edc16a77.png" alt class="image--center mx-auto" /></p>
<p>You can create <strong>multiple groups</strong> in CloudGlance to better organize and manage your AWS accounts. For example, you might create separate groups for <strong>clients, projects, or environments</strong> (e.g., <strong>Development, Staging, Production</strong>).</p>
<p>Once your group is created, open it and start adding <strong>AWS IAM profiles</strong>. This allows you to quickly access different AWS accounts with a single click, improving efficiency when switching between environments.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739369713288/80141081-8a50-4c3c-8359-7c86ea2cd51a.png" alt class="image--center mx-auto" /></p>
<p>Depending on your organization's setup, you need to select the appropriate <strong>profile type</strong> when adding AWS IAM accounts in CloudGlance. Below is a brief overview of each type:</p>
<p><strong>1. IAM User – Role</strong></p>
<ul>
<li><p><strong>Use Case</strong>: You have an existing <strong>IAM User</strong> who needs to <strong>assume an IAM Role</strong> (e.g., <code>arn:aws:iam::123456789012:role/YourRoleName</code>).</p>
</li>
<li><p><strong>Source Profile</strong>: Often requires specifying a “source” AWS profile (with user credentials) that can assume the target role.</p>
</li>
<li><p><strong>Role ARN &amp; External ID</strong>: You’ll provide the <strong>Role ARN</strong> and optionally an <strong>external ID</strong> if the role trust policy requires it.</p>
</li>
<li><p><strong>STS Duration</strong>: Lets you set how long the temporary session is valid.</p>
</li>
</ul>
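<p>In <code>~/.aws/config</code> terms, this profile type corresponds to a role profile that references a source profile with user credentials. A minimal sketch, with placeholder account ID, role name, and external ID:</p>
<pre><code class="lang-ini">[profile base-user]
region = us-east-1

[profile prod-admin]
source_profile = base-user
role_arn = arn:aws:iam::123456789012:role/YourRoleName
external_id = your-external-id
duration_seconds = 3600
</code></pre>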
<p><strong>2. IAM User</strong></p>
<ul>
<li><p><strong>Use Case</strong>: You have a standard <strong>IAM User</strong> with <strong>Access Key</strong> and <strong>Secret Key</strong>.</p>
</li>
<li><p><strong>Direct Credentials</strong>: Stores (or references) those keys for direct usage without an intermediate role.</p>
</li>
<li><p><strong>MFA Option</strong>: If your IAM policies require MFA, CloudGlance can prompt you for a code.</p>
</li>
<li><p><strong>Federated Login</strong>: Typically can generate STS tokens to open the AWS Console or export environment variables.</p>
</li>
</ul>
<p><strong>3. SSO Profile</strong></p>
<ul>
<li><p><strong>Use Case</strong>: When using <strong>AWS Single Sign-On (IAM Identity Center)</strong> to manage user credentials rather than an IAM user and password.</p>
</li>
<li><p><strong>Browser-Based Flow</strong>: You typically authenticate in a browser with your SSO provider (Okta, Azure AD, etc.), and CloudGlance integrates with that.</p>
</li>
<li><p><strong>Automatic Token Refresh</strong>: It retrieves short-lived credentials after you log in through SSO, often with MFA included in the SSO flow.</p>
</li>
</ul>
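<p>For comparison, an SSO profile created by <code>aws configure sso</code> is stored in <code>~/.aws/config</code> along these lines (the start URL, account ID, and role name below are placeholders):</p>
<pre><code class="lang-ini">[profile my-sso-profile]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = AdministratorAccess
region = us-east-1
output = json
</code></pre>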
<p>In this demo, I'll walk you through setting up <strong>two personal AWS accounts</strong> in CloudGlance using:<br /><strong>IAM User → Federated Login</strong></p>
<p>The first account is called <strong>"shahin-new"</strong>. Below is the configuration needed to add it to CloudGlance:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739370259714/caacd2da-8e8c-4a4f-8b68-e7f9e6f699b6.png" alt class="image--center mx-auto" /></p>
<p>Now that we have defined our AWS profile in the <code>~/.aws/config</code> file, let’s add it to <strong>CloudGlance</strong> for seamless access to the AWS Console.</p>
<ol>
<li><p><strong>Select "IAM User"</strong> – Choose the <strong>IAM User</strong> tab to configure access using AWS IAM credentials.</p>
</li>
<li><p><strong>Select "Federated Login"</strong> – This allows login through AWS IAM credentials instead of CLI-only authentication.</p>
</li>
<li><p><strong>Enter a Profile Name</strong> – This is the display name for the profile in CloudGlance. In this example, we use <code>"shahin-new"</code>.</p>
</li>
<li><p><strong>Firefox Container Name (Optional)</strong> – If using <strong>Firefox Multi-Account Containers</strong>, set a container name (e.g., <code>"shahin-new"</code>). Keeping it the same as the profile name helps with organization.</p>
</li>
<li><p><strong>Choose a Profile Color (Optional)</strong> – Assigning colors helps distinguish accounts visually.</p>
</li>
<li><p><strong>Select the AWS Profile</strong> – Choose the AWS profile you previously configured in the <code>~/.aws/config</code> file.</p>
</li>
<li><p><strong>Access Key ID (Auto-Filled)</strong> – This field is automatically retrieved from your AWS credentials file.</p>
</li>
<li><p><strong>Secret Access Key (Auto-Filled)</strong> – Also auto-filled from your AWS credentials.</p>
</li>
<li><p><strong>Region (Auto-Filled)</strong> – The default AWS region is automatically retrieved. You can modify this if needed.</p>
</li>
<li><p><strong>Assume Policies (Optional)</strong> – If your IAM user has permission to assume roles, you can specify <strong>any valid IAM policy ARN</strong>.</p>
</li>
<li><p>In this example, we use:<br /><code>arn:aws:iam::aws:policy/AdministratorAccess</code><br />This grants <strong>full administrator permissions</strong> since the account owner needs unrestricted access.</p>
</li>
</ol>
<p>Follow the same steps to add more AWS accounts, grouping them as needed for easier management.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739370882133/4100d759-c70e-473c-a654-d61b8eb99709.png" alt class="image--center mx-auto" /></p>
<p>The result will look like this, allowing you to open the AWS Console with a single click using the displayed button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739371485359/69bb8bfe-51d5-4fd5-a488-9a74a08b309f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739371777969/e20fd363-3f5a-4bb5-95b8-89ac01dccd4f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739371986553/1a91fa76-775a-4073-8353-44674c26d5d8.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">CloudGlance also includes features for managing bastion hosts. If you're interested in this topic, let me know in the comments, and I'll write another blog about it.</div>
</div>

<p>Site:</p>
<p><a target="_blank" href="https://cloudglance.dev/">https://cloudglance.dev/</a></p>
<p>Docs:</p>
<p><a target="_blank" href="https://docs.cloudglance.dev/">https://docs.cloudglance.dev/</a></p>
<p>Repo:</p>
<p><a target="_blank" href="https://github.com/Systanics/CloudGlance">https://github.com/Systanics/CloudGlance</a></p>
<hr />
<h2 id="heading-granted-cli-based-aws-account-manager">Granted — CLI-Based AWS Account Manager</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739136151790/faa6c009-91dd-479a-8091-e06e45ea6b8f.png" alt class="image--center mx-auto" /></p>
<p>Granted is a command line interface (CLI) tool which simplifies access to cloud roles and allows multiple cloud accounts to be opened in your web browser simultaneously. The goals of Granted are:</p>
<ul>
<li><p>Provide a fast experience around finding and assuming roles</p>
</li>
<li><p>Leverage native browser functionality to allow multiple accounts to be accessed at once</p>
</li>
<li><p>Encrypt cached credentials to avoid plaintext SSO tokens being saved on disk</p>
</li>
</ul>
<h3 id="heading-installation-1">Installation</h3>
<p>Make sure you have Firefox installed.</p>
<p>Also make sure that the Granted Firefox extension is installed:</p>
<ul>
<li><a target="_blank" href="https://addons.mozilla.org/en-GB/firefox/addon/granted/">https://addons.mozilla.org/en-GB/firefox/addon/granted</a></li>
</ul>
<p>AWS CLI v2 should also be installed and configured as shown in the previous section.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you prefer to use AWS SSO, then run <code>aws configure sso</code> and it will walk you through the process of setting up your profile file.</div>
</div>

<p>Now you are ready to install Granted!</p>
<p>To install it on Windows:</p>
<ul>
<li><a target="_blank" href="https://releases.commonfate.io/granted/v0.36.2/granted_0.36.2_windows_x86_64.zip">https://releases.commonfate.io/granted/v0.36.2/granted_0.36.2_windows_x86_64.zip</a></li>
</ul>
<p>To install it on Mac:</p>
<pre><code class="lang-bash">brew tap common-fate/granted
brew install granted
</code></pre>
<p>To install it on Linux:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># install GPG</span>
sudo apt update &amp;&amp; sudo apt install gpg

<span class="hljs-comment"># download the Common Fate Linux GPG key</span>
wget -O- https://apt.releases.commonfate.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/common-fate-linux.gpg

<span class="hljs-comment"># you can check the fingerprint of the key by running</span>
<span class="hljs-comment"># gpg --no-default-keyring --keyring /usr/share/keyrings/common-fate-linux.gpg --fingerprint</span>
<span class="hljs-comment"># the fingerprint of our Linux Releases key is 783A 4D1A 3057 4D2A BED0 49DD DE9D 631D 2D1D C944</span>

<span class="hljs-comment"># add the Common Fate APT repository</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/usr/share/keyrings/common-fate-linux.gpg] https://apt.releases.commonfate.io stable main"</span> | sudo tee /etc/apt/sources.list.d/common-fate.list

<span class="hljs-comment"># update your repositories</span>
sudo apt update

<span class="hljs-comment"># install Granted</span>
sudo apt install granted

<span class="hljs-comment"># verify your installation</span>
granted -v
</code></pre>
<h3 id="heading-configuration-1">Configuration</h3>
<p>After installing <strong>Granted</strong>, open your terminal and run the <code>assume</code> command. Follow the prompts and provide answers based on your requirements or the provided screenshot.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739373880659/37bb2f5b-c7f0-4b4d-8658-797d9816b1d7.png" alt class="image--center mx-auto" /></p>
<p>Then, run the <code>assume</code> command again to configure your profile.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739373972928/68200dc6-e519-4c57-903e-da42956b4cc9.png" alt class="image--center mx-auto" /></p>
<p>If you're using <strong>Linux</strong>, you'll be prompted to enter a password to create a new keyring. Choose a password and save it somewhere safe for future reference.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739373556475/f7fbfc20-7ec8-4e14-882b-f012dcd8ca91.png" alt class="image--center mx-auto" /></p>
<p>Finally, to open <strong>Firefox</strong> and access different accounts in separate isolated tabs:</p>
<pre><code class="lang-bash">assume -c &lt;profile-name&gt;
<span class="hljs-comment"># Examples:</span>
assume -c shahin-new
assume -c shahin-old
</code></pre>
<p>You can also use <code>assume -c</code> with the profile selector to quickly choose and open any AWS account you need:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739374601297/d8b2a130-b94e-43ed-a30c-fa9d91919173.png" alt class="image--center mx-auto" /></p>
<p>The result will be the same as CloudGlance:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739374354756/014ab5fd-68dc-41e8-831a-621637005ee9.png" alt class="image--center mx-auto" /></p>
<p>Additional options:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Opening the console with a specific region</span>
assume -c -r ap-southeast-1
<span class="hljs-comment"># or</span>
assume -c -r ap-southeast-1 role-a

<span class="hljs-comment"># Opening the console to a specific service. -s stands for service</span>
assume -s iam
</code></pre>
<h3 id="heading-granted-container-cleanup">Granted container cleanup</h3>
<p>The Granted Firefox extension includes a menu where you can view and clear your tab containers. The menu icon appears next to the settings icon, as shown below.</p>
<p>Clicking the icon opens a menu where you can clear your Granted tab containers. This is useful if you have roles you no longer access and you'd like to declutter your tab container list.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739374853780/34b3fde7-f902-4f2b-a028-8e1c444a9b83.png" alt class="image--center mx-auto" /></p>
<p>Granted Official Website:</p>
<p><a target="_blank" href="https://www.granted.dev/">https://www.granted.dev</a></p>
<p>Docs:</p>
<p><a target="_blank" href="https://docs.commonfate.io/granted/introduction">https://docs.commonfate.io/granted/introduction</a></p>
<p>Repo:</p>
<p><a target="_blank" href="https://github.com/common-fate/granted">https://github.com/common-fate/granted</a></p>
<hr />
<h2 id="heading-using-firefox-extensions-quick-solution">Using Firefox Extensions — Quick Solution</h2>
<h3 id="heading-container-tab-groups"><strong>Container Tab Groups</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126418906/8f44a582-f2dd-4ed7-b6a0-a4162da86e0e.png" alt class="image--center mx-auto" /></p>
<p>This extension serves as an alternative to <a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/">Firefox's official extension</a>, offering additional features that are worth highlighting.</p>
<p>Download page:</p>
<ul>
<li><a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/container-tab-groups/">https://addons.mozilla.org/en-US/firefox/addon/container-tab-groups/</a></li>
</ul>
<p>After installation, click “Get Started” or close the sidebar.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126724006/d134994e-4f97-4b13-8d33-dbed40b28d93.png" alt class="image--center mx-auto" /></p>
<p>There are multiple ways to fully utilize the <strong>Container Tab Group</strong> extension. One option is the <strong>compact sidebar</strong>, while another is the <strong>panorama grid</strong>, an expanded and more user-friendly version that simplifies managing tab groups via containers.</p>
<p>To open the sidebar:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739127127251/9d8771ce-eaf1-4e51-a844-aaad8a43de51.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739127343227/f3cb5c91-f67b-4e8a-b611-d578755221bc.png" alt class="image--center mx-auto" /></p>
<p>To open the panorama grid:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739127162060/6e9a0b3e-7ffb-4c84-9a1e-c54cf81f183d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739127371352/ea85130b-ee7b-4622-b96d-5cc5825c8fa2.png" alt class="image--center mx-auto" /></p>
<p>To manage multiple AWS accounts using this extension, open the <strong>sidebar</strong>, create as many containers as needed, and name them accordingly. Below is an example of two projects, each containing three accounts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739128584695/29ed39c8-a6dc-44d9-beb3-5e5eab5534d3.png" alt class="image--center mx-auto" /></p>
<p>Alternative:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126437114/2a14ad90-4753-4cd4-9c41-6b6b199dcc3d.png" alt class="image--center mx-auto" /></p>
<p>Download page:</p>
<p><a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/">https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/</a></p>
<hr />
<h2 id="heading-using-firefox-profiles">Using Firefox profiles</h2>
<p>A <strong>Firefox profile</strong> is a user-specific directory where Mozilla Firefox stores personal data, settings, extensions, bookmarks, history, cookies, and preferences. Each profile operates independently, allowing multiple users to have different browsing environments on the same system.</p>
<h3 id="heading-key-features">Key Features</h3>
<ul>
<li><p><strong>Isolation:</strong> Each profile has its own set of data, preventing interference between multiple users.</p>
</li>
<li><p><strong>Customization:</strong> Profiles store user-specific preferences, including extensions, themes, and saved logins.</p>
</li>
<li><p><strong>Performance Optimization:</strong> Creating a fresh profile can help resolve browser issues such as slow performance or crashes.</p>
</li>
<li><p><strong>Multi-Profile Management:</strong> Firefox allows users to create and switch between multiple profiles using the <strong>Profile Manager</strong>.</p>
</li>
</ul>
<h3 id="heading-profile-location">Profile Location</h3>
<p>The profiles are stored in different locations depending on the operating system:</p>
<ul>
<li><p><strong>Windows:</strong></p>
<p>  <code>C:\Users\&lt;YourUsername&gt;\AppData\Roaming\Mozilla\Firefox\Profiles\</code></p>
</li>
<li><p><strong>Linux:</strong></p>
<p>  <code>~/.mozilla/firefox/</code></p>
</li>
<li><p><strong>macOS:</strong></p>
<p>  <code>~/Library/Application Support/Firefox/Profiles/</code></p>
</li>
</ul>
<h3 id="heading-managing-profiles">Managing Profiles</h3>
<p>Open <strong>Firefox Profile Manager</strong>:</p>
<ul>
<li><p>Run <code>about:profiles</code> in Firefox address bar.</p>
</li>
<li><p>Click “Create a New Profile”</p>
</li>
<li><p>Click Next</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126516395/5131e671-3c2f-42e0-a4c1-d2d9d694f0fa.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Choose a descriptive name, create a folder for the profile, select it as the profile folder, and click Finish</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126559560/ab270899-d83d-4f67-8970-24b5df758d04.png" alt class="image--center mx-auto" /></p>
<p>Then you can open, rename, or delete profiles as needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739126588056/6f9a5936-2eb5-4d1f-b6f6-b57ff0b90fe4.png" alt class="image--center mx-auto" /></p>
<p>You can assign specific profiles for different projects and their stages such as Dev, Staging, or Prod.</p>
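<p>You can also launch profiles directly from the command line, which is handy for scripting per-project shortcuts. For example (the profile name <code>prod</code> here is a placeholder):</p>
<pre><code class="lang-bash"># Open the Profile Manager
firefox -P

# Launch a specific profile in its own instance,
# so it can run alongside other profiles
firefox -P prod --no-remote
</code></pre>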
<p>You can find more info about Firefox profile manager here:</p>
<p><a target="_blank" href="https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles">https://support.mozilla.org/en-US/kb/profile-manager-create-remove-switch-firefox-profiles</a></p>
<hr />
<h2 id="heading-aws-extend-switch-roles"><strong>AWS Extend Switch Roles</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739136020662/ee59b2c2-206d-4ff9-9227-ecdc6d9f20b7.png" alt class="image--center mx-auto" /></p>
<p>When managing multiple AWS accounts or assuming different IAM roles, constantly switching between them in the AWS Console can be tedious. The <strong>AWS Extend Switch Roles</strong> extension for Firefox and Chrome makes this process seamless by allowing you to quickly switch roles without manually re-entering credentials.</p>
<p>This browser extension enhances the native <strong>AWS role-switching functionality</strong>, providing a cleaner UI, color-coded role indicators, and the ability to save multiple role configurations for quick access. In this section, we'll explore how to install, configure, and use AWS Extend Switch Roles effectively to boost productivity in multi-account AWS environments.</p>
<h3 id="heading-installation-2">Installation</h3>
<p>You can install this extension on both Chrome-based browsers and Firefox.</p>
<p>Download Chrome extension:</p>
<ul>
<li><a target="_blank" href="https://chromewebstore.google.com/detail/AWS%20Extend%20Switch%20Roles/jpmkfafbacpgapdghgdpembnojdlgkdl">https://chromewebstore.google.com/detail/AWS%20Extend%20Switch%20Roles/jpmkfafbacpgapdghgdpembnojdlgkdl</a></li>
</ul>
<p>Download Firefox extension:</p>
<ul>
<li><a target="_blank" href="https://addons.mozilla.org/en-US/firefox/addon/aws-extend-switch-roles3/">https://addons.mozilla.org/en-US/firefox/addon/aws-extend-switch-roles3</a></li>
</ul>
<h3 id="heading-configuration-2">Configuration</h3>
<p>After installation, <strong>left-click</strong> on the extension logo and navigate to the <strong>configuration section</strong> of the extension.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739376457915/306f0746-0763-4d1b-93ed-377cee8c3c05.png" alt class="image--center mx-auto" /></p>
<p>Enter your <strong>Role ARN</strong> and <strong>Region</strong>, configure the extension in <strong>INI format</strong>, and press <strong>Save</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739376676454/fb3f49ad-afd0-4c68-ad4b-67f6594ed2dd.png" alt class="image--center mx-auto" /></p>
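<p>As a rough sketch, the INI configuration lists one section per role, typically with a target role ARN and optional region and color. The section names, account IDs, and role names below are placeholders; check the extension's documentation for the full set of supported keys:</p>
<pre><code class="lang-ini">[Dev-Admin]
role_arn = arn:aws:iam::111111111111:role/Admin
region = us-east-1
color = 00ff7f

[Prod-ReadOnly]
role_arn = arn:aws:iam::222222222222:role/ReadOnly
color = ff7f00
</code></pre>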
<p>Go to the account from which you want to switch roles, and the extension will begin functioning.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739376791297/20befdbc-d588-4833-9fcd-a04944601bf9.png" alt class="image--center mx-auto" /></p>
<p>From this point, you can switch between as many roles as needed.</p>
<hr />
<h2 id="heading-aws-builtin-multi-session-support-new-feature">AWS Builtin Multi-Session Support — New Feature</h2>
<p>On 16 Jan 2025, AWS announced multi-session support, which enables customers to access multiple AWS accounts simultaneously in the AWS Console. You can sign in to up to five sessions in a single browser, using any combination of root, IAM, or federated roles in the same account or in different accounts.</p>
<p>When you enable multi-session support, the console URL contains a subdomain (for example, <a target="_blank" href="https://000000000000-aaaaaaaa.us-east-1.console.aws.amazon.com/console/home?region=us-east-1"><code>https://000000000000-aaaaaaaa.us-east-1.console.aws.amazon.com/console/home?region=us-east-1</code></a>). Be sure to update your bookmarks and console links.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739386615903/6b0b90c7-6ccb-4bf9-9b1c-72bee6c51c60.png" alt class="image--center mx-auto" /></p>
<p>You must opt-in to multi-session support by choosing <strong>Turn on multi-session</strong> in the account menu in the AWS Management Console, or by choosing <strong>Enable multi-session</strong> on <a target="_blank" href="https://console.aws.amazon.com/">https://console.aws.amazon.com/</a>. You can opt-out of multi-sessions at any time by choosing <strong>Disable multi-session</strong> on <a target="_blank" href="https://console.aws.amazon.com/">https://console.aws.amazon.com/</a> or by clearing your browser cookies. Opt-in is browser-specific.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739386741679/9907b4f6-5580-4b48-a21d-2930aa8b776a.png" alt class="image--center mx-auto" /></p>
<p>With <strong>Add session</strong>, you can work with up to five different accounts at once.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<h3 id="heading-table-of-comparison">Table of Comparison</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>CloudGlance</td><td>Granted</td><td>Firefox Extensions</td><td>AWS Extend Switch Roles</td><td>AWS Built-in Multi-Account</td></tr>
</thead>
<tbody>
<tr>
<td>User Interface</td><td>GUI-based</td><td>CLI-based</td><td>GUI-based</td><td>Browser Extension (Firefox &amp; Chrome)</td><td>GUI-based</td></tr>
<tr>
<td>Primary Usage</td><td>Multi-account AWS management</td><td>Fast role-switching &amp; multi-account AWS access</td><td>Basic AWS multi-account separation</td><td>Quick IAM role switching in AWS Console</td><td>Native AWS Console multi-session</td></tr>
<tr>
<td>AWS Account Management</td><td>Group &amp; visualize multiple AWS accounts</td><td>Assume multiple roles quickly</td><td>Manages AWS sessions via browser containers</td><td>Switch between multiple IAM roles easily</td><td>Up to 5 accounts managed simultaneously</td></tr>
<tr>
<td>Multi-Session Support</td><td>Yes (using Firefox Containers)</td><td>Yes (via browser session handling)</td><td>Yes (via containerization)</td><td>No native multi-session, but quick role-switching</td><td>Yes (up to 5 sessions in one browser)</td></tr>
<tr>
<td>Authentication Methods</td><td>IAM User, IAM Role, Federated Login, AWS SSO</td><td>IAM Role, AWS SSO</td><td>Browser-based authentication</td><td>IAM Role (configured via INI format)</td><td>Root, IAM User, Federated Login</td></tr>
<tr>
<td>AWS Console Access</td><td>One-click console access</td><td>Open AWS console via CLI</td><td>AWS console access via container tabs</td><td>Quick AWS Console access via role-switching</td><td>Native AWS console integration</td></tr>
<tr>
<td>SSH &amp; Port Forwarding</td><td>Supports bastions &amp; AWS SSM</td><td>No direct SSH support</td><td>No SSH support</td><td>No SSH support</td><td>No SSH support</td></tr>
<tr>
<td>Team Collaboration</td><td>Built-in Git support for team collaboration</td><td>No built-in team collaboration features</td><td>No team collaboration</td><td>No team collaboration features</td><td>No team collaboration</td></tr>
<tr>
<td>Browser Compatibility</td><td>Firefox (via extensions)</td><td>Firefox (via extension)</td><td>Firefox only</td><td>Firefox, Chrome</td><td>Any modern browser</td></tr>
<tr>
<td>Installation Complexity</td><td>Medium (requires AWS CLI v2, Firefox setup)</td><td>Medium (requires AWS CLI v2, terminal setup)</td><td>Low (simple extension installation)</td><td>Low (simple browser extension setup)</td><td>Very Low (enabled in AWS Console)</td></tr>
<tr>
<td>Encryption of Credentials</td><td>Yes (can encrypt ~/.aws/credentials)</td><td>Yes (encrypts cached credentials)</td><td>No built-in encryption</td><td>No built-in encryption</td><td>No built-in encryption</td></tr>
<tr>
<td>Free or Paid</td><td>Free for now (might become paid)</td><td>Free</td><td>Free</td><td>Free</td><td>Free</td></tr>
</tbody>
</table>
</div><h3 id="heading-final-thoughts">Final Thoughts</h3>
<ul>
<li><p><strong>CloudGlance</strong> is best for <strong>teams and DevOps professionals</strong> managing <strong>multiple AWS accounts with SSH access and port forwarding needs</strong>.</p>
</li>
<li><p><strong>Granted</strong> is <strong>perfect for security-conscious CLI users</strong> who need <strong>quick and encrypted role-switching</strong>.</p>
</li>
<li><p><strong>Firefox Extensions</strong> offer a <strong>simple, container-based solution</strong> for <strong>isolating AWS sessions</strong>, best suited for <strong>browser-heavy workflows</strong>.</p>
</li>
<li><p><strong>AWS Extend Switch Roles</strong> is a <strong>convenient, browser-based IAM role-switcher</strong>, best for <strong>users frequently switching roles inside AWS Console</strong>.</p>
</li>
<li><p><strong>AWS Built-in Multi-Account Manager</strong> is the <strong>easiest way to manage multiple AWS accounts within AWS Console</strong> but lacks flexibility beyond basic session handling.</p>
</li>
</ul>
<p>Each tool has its <strong>own strengths</strong>, and the choice depends on <strong>whether you prioritize GUI, CLI, multi-session handling, SSH support, or security</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[How to Install & Run DeepSeek R1 Locally with GUI on Windows, Linux, and macOS | Step-by-Step Guide]]></title><description><![CDATA[What is Deepseek R1 model?
DeepSeek-R1 is an advanced open-source artificial intelligence model developed by the Chinese startup DeepSeek. It is designed to excel in complex reasoning tasks, including mathematics, coding, and logical problem-solving....]]></description><link>https://devopsdetours.com/how-to-install-run-deepseek-r1-locally-with-gui-on-windows-linux-and-macos-step-by-step-guide</link><guid isPermaLink="true">https://devopsdetours.com/how-to-install-run-deepseek-r1-locally-with-gui-on-windows-linux-and-macos-step-by-step-guide</guid><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Sat, 01 Feb 2025 11:31:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738409332065/5c166bca-c4a6-4aac-888e-896e7d70fd7e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-deepseek-r1-model">What is Deepseek R1 model?</h2>
<p>DeepSeek-R1 is an advanced open-source artificial intelligence model developed by the Chinese startup DeepSeek. It is designed to excel in complex reasoning tasks, including mathematics, coding, and logical problem-solving. Notably, DeepSeek-R1 achieves performance comparable to leading models like OpenAI's o1, but with significantly lower development costs and computational requirements.</p>
<p><strong>Significance of DeepSeek-R1:</strong></p>
<ul>
<li><p><strong>Cost Efficiency:</strong> Developed with a budget of less than $6 million, DeepSeek-R1 challenges the high-cost approaches of competitors, making advanced AI more accessible.</p>
</li>
<li><p><strong>Open-Source Accessibility:</strong> By open-sourcing DeepSeek-R1, DeepSeek promotes transparency and collaboration, allowing researchers and developers worldwide to study, modify, and enhance the model.</p>
</li>
<li><p><strong>Technological Impact:</strong> The model's emergence has prompted a reevaluation of AI development strategies, emphasizing efficiency and innovation over sheer computational power.</p>
</li>
</ul>
<p><strong>Advantages of Running DeepSeek-R1 Locally:</strong></p>
<ul>
<li><p><strong>Data Privacy:</strong> Processing data on local machines ensures that sensitive information remains secure, mitigating risks associated with transmitting data to external servers.</p>
</li>
<li><p><strong>Customization:</strong> Running the model locally allows for tailored modifications to meet specific project requirements, facilitating experimentation and optimization.</p>
</li>
<li><p><strong>Reduced Latency:</strong> Local deployment eliminates the need for internet-based API calls, resulting in faster response times crucial for real-time applications.</p>
</li>
<li><p><strong>Cost Savings:</strong> Operating the model on local hardware can reduce expenses related to cloud-based services and data transfer.</p>
</li>
</ul>
<hr />
<h2 id="heading-key-considerations-for-running-deepseek-r1-locally"><strong>Key Considerations for Running DeepSeek-R1 Locally</strong></h2>
<p>Before proceeding, keep the following DeepSeek-R1 models and their corresponding sizes in mind:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Parameters (B)</td><td>Size (GB)</td></tr>
</thead>
<tbody>
<tr>
<td>1.5B</td><td>1.1 GB</td></tr>
<tr>
<td>7B</td><td>4.7 GB</td></tr>
<tr>
<td>8B</td><td>4.9 GB</td></tr>
<tr>
<td>14B</td><td>9.0 GB</td></tr>
<tr>
<td>32B</td><td>20 GB</td></tr>
<tr>
<td>70B</td><td>43 GB</td></tr>
<tr>
<td>671B</td><td>404 GB</td></tr>
</tbody>
</table>
</div><p>When running DeepSeek-R1 locally on your computer, you should consider the following factors:</p>
<h3 id="heading-1-hardware-requirements"><strong>1. Hardware Requirements</strong></h3>
<ul>
<li><p><strong>VRAM (GPU Memory):</strong></p>
<ul>
<li><p>The different model sizes range from <strong>1.1GB (1.5B model)</strong> to <strong>404GB (671B model)</strong>.</p>
</li>
<li><p>If you have a consumer-grade GPU (e.g., RTX 3060, 3070, 4080), you should opt for the <strong>7B model (4.7GB VRAM required)</strong> or the <strong>8B model (4.9GB VRAM required)</strong>.</p>
</li>
<li><p>Larger models (14B, 32B, 70B) require more powerful GPUs with at least <strong>10GB+ VRAM</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>CPU and RAM:</strong></p>
<ul>
<li><p>A powerful CPU (e.g., <strong>AMD Ryzen 9 / Intel i9</strong>) is recommended for inference.</p>
</li>
<li><p>You will need <strong>at least double the VRAM size in system RAM</strong>. For example, if you run the <strong>7B model (4.7GB VRAM)</strong>, you should have <strong>at least 16GB RAM</strong> for smooth performance.</p>
</li>
</ul>
</li>
<li><p><strong>Storage:</strong></p>
<ul>
<li><p>Ensure you have enough disk space. The 7B model alone requires <strong>4.7GB</strong>, while the larger models (70B+) need <strong>hundreds of gigabytes</strong>.</p>
</li>
<li><p>An <strong>NVMe SSD</strong> is preferable for faster model loading.</p>
</li>
</ul>
</li>
</ul>
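<p>As a rough rule of thumb, the size table above can be turned into a small helper that picks the largest model likely to fit in a given amount of VRAM. The thresholds below are illustrative assumptions derived from the listed sizes, not official requirements:</p>
<pre><code class="lang-bash"># Illustrative: map free VRAM (GiB) to the largest DeepSeek-R1
# variant from the table above that should comfortably fit
pick_model() {
  vram_gb=$1
  if [ "$vram_gb" -ge 48 ]; then echo "70b"
  elif [ "$vram_gb" -ge 24 ]; then echo "32b"
  elif [ "$vram_gb" -ge 10 ]; then echo "14b"
  elif [ "$vram_gb" -ge 6 ]; then echo "8b"
  else echo "1.5b"
  fi
}

pick_model 8   # prints: 8b
</code></pre>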
<h3 id="heading-2-software-requirements"><strong>2. Software Requirements</strong></h3>
<ul>
<li><p><strong>CUDA / ROCm (for GPU Acceleration)</strong></p>
<ul>
<li><p>If you have an <strong>NVIDIA GPU</strong>, install the latest <strong>CUDA</strong> and <strong>cuDNN</strong>.</p>
<ul>
<li>When you install the NVIDIA GeForce driver, it typically includes the <strong>CUDA runtime libraries</strong>—the components needed to run CUDA-accelerated applications.</li>
</ul>
</li>
<li><p>For <strong>AMD GPUs</strong>, you will need <strong>ROCm</strong>.</p>
</li>
<li><p>If you use <strong>CPU-only inference</strong>, performance will be <strong>significantly slower</strong>.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-3-model-selection"><strong>3. Model Selection</strong></h3>
<ul>
<li><p>Choose a model that balances <strong>performance and hardware limitations</strong>:</p>
<ul>
<li><p><strong>1.5B:</strong> Very lightweight, suitable for older GPUs or CPU-only.</p>
</li>
<li><p><strong>7B / 8B:</strong> Good for mid-range GPUs with <strong>6GB+ VRAM</strong>.</p>
</li>
<li><p><strong>14B+:</strong> Requires <strong>high-end GPUs (e.g., RTX 3090, A100, H100)</strong>.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-4-optimization-amp-performance"><strong>4. Optimization &amp; Performance</strong></h3>
<ul>
<li><p><strong>Quantization</strong>: Reducing precision (e.g., <strong>GGUF 4-bit, 8-bit quantization</strong>) helps reduce VRAM usage.</p>
</li>
<li><p><strong>Batch Size / Context Length</strong>: Adjust to balance response quality and speed.</p>
</li>
<li><p><strong>Multi-GPU</strong>: If you have multiple GPUs, some inference frameworks support model sharding.</p>
</li>
</ul>
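<p>A quick way to see why quantization matters is a back-of-envelope estimate of the weight footprint: parameters (in billions) times bits per weight, divided by 8. This ignores KV-cache and runtime overhead, so real memory usage is somewhat higher:</p>
<pre><code class="lang-bash"># Back-of-envelope: billions of params * bits per weight / 8 = GB of weights
est_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 }'; }

est_gb 8 4    # 4-bit 8B model
est_gb 8 16   # fp16 8B model
</code></pre>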
<hr />
<h2 id="heading-ollama-and-deekseek-installation">Ollama and DeepSeek Installation</h2>
<p>Before starting the installation process, if you are using Windows, make sure your NVIDIA graphics driver is up to date. You can download and install the latest driver here: <a target="_blank" href="https://www.nvidia.com/en-us/geforce/drivers/">https://www.nvidia.com/en-us/geforce/drivers/</a></p>
<p>To find out whether Ollama supports your GPU you can visit: <a target="_blank" href="https://github.com/ollama/ollama/blob/main/docs/gpu.md">https://github.com/ollama/ollama/blob/main/docs/gpu.md</a></p>
<p>First, install Ollama and let it run in the background:</p>
<ul>
<li><p>For Windows: <a target="_blank" href="https://ollama.com/download/windows">https://ollama.com/download/windows</a></p>
</li>
<li><p>For macOS: <a target="_blank" href="https://ollama.com/download/mac">https://ollama.com/download/mac</a></p>
</li>
<li><p>For Linux:</p>
</li>
</ul>
<pre><code class="lang-bash">curl -fsSL https://ollama.com/install.sh | sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738404798332/4df14a0c-efce-42c9-bd2a-7a2893ecb454.png" alt class="image--center mx-auto" /></p>
<p>Next, download and install the model version that best fits your needs based on the explanation above.</p>
<p>To do this:</p>
<ol>
<li><p>Visit Ollama’s DeepSeek-R1 Library: <a target="_blank" href="https://ollama.com/library/deepseek-r1">https://ollama.com/library/deepseek-r1</a></p>
</li>
<li><p>Choose your preferred model version (e.g., <strong>8B</strong>).</p>
</li>
<li><p>Copy the provided command and paste it into your terminal.</p>
</li>
</ol>
<p>This installation process is the same for <strong>Windows, macOS, and Linux</strong>.</p>
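<p>For example, for the <strong>8B</strong> variant the command is:</p>
<pre><code class="lang-bash"># Downloads the model on first run, then opens an interactive chat
ollama run deepseek-r1:8b

# Or fetch the model without starting a chat session
ollama pull deepseek-r1:8b
</code></pre>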
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738404494643/eb3c46a9-6dc8-4a36-91de-44db8a3dee89.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405162788/224932c7-a329-42a5-b006-444f1cb06510.png" alt class="image--center mx-auto" /></p>
<p>At this stage, you can start using DeepSeek-R1 directly from the command line. However, to create a more <strong>ChatGPT-like experience</strong>, we will install <strong>AnythingLLM</strong> for an enhanced user interface.</p>
<hr />
<h2 id="heading-anythingllm-installation-and-configuration">AnythingLLM Installation and Configuration</h2>
<ul>
<li><p>Visit AnythingLLM Desktop: <a target="_blank" href="https://anythingllm.com/desktop">https://anythingllm.com/desktop</a></p>
</li>
<li><p>Download and install the appropriate version for <strong>Windows, Linux, or macOS</strong>.</p>
</li>
</ul>
<p>Configure AnythingLLM:</p>
<ul>
<li><p>Open <strong>AnythingLLM</strong> after installation.</p>
</li>
<li><p>Follow the configuration steps as shown in the screenshots below to set it up properly.</p>
</li>
</ul>
<p>This setup will enhance your experience by providing a <strong>ChatGPT-like interface</strong> for interacting with DeepSeek-R1 locally. 🚀</p>
<p>Go to settings:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405537309/e5b41746-dea8-421f-88fc-0ab5b7d739b7.png" alt class="image--center mx-auto" /></p>
<p>Configure your LLM provider and then go back to its main page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405685111/8e4e5cb0-2a60-4cff-b7b0-fd2adbc81c8a.png" alt class="image--center mx-auto" /></p>
<p>Create a new workspace, give it a name, and save it:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405732270/242f3186-3980-46f5-903c-94a792b4e7c5.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405782013/fc0b6280-abae-4545-b9a7-ada74fa74105.png" alt class="image--center mx-auto" /></p>
<p>Go to the settings of your workspace and configure it according to the screenshot:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738405960297/27f2b32e-c369-475e-968a-146893110cce.png" alt class="image--center mx-auto" /></p>
<p>After that choose default or new thread to start a new conversation:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738406171202/db08a211-e8be-49f7-a1d4-bc1193557144.png" alt class="image--center mx-auto" /></p>
<p>Congratulations! You now have a powerful <strong>OpenAI-O1-like model</strong> running locally on your machine! 🚀</p>
<hr />
<h2 id="heading-anythingllm-alternative">AnythingLLM Alternative</h2>
<p>If you're looking for an alternative to <strong>AnythingLLM</strong>, you can also use <strong>LM Studio</strong>. One key advantage of <strong>LM Studio</strong> is that you can <strong>install models directly from within the app</strong>, eliminating the need for manual downloads or additional setup.</p>
<h4 id="heading-how-to-get-started-with-lm-studio">How to Get Started with LM Studio</h4>
<ol>
<li><p><strong>Install LM Studio</strong> – Download and install the software from <a target="_blank" href="https://lmstudio.ai/">https://lmstudio.ai/</a></p>
</li>
<li><p><strong>Search for Your Model</strong> – Use the built-in search feature to find <strong>DeepSeek R1</strong> or any other model.</p>
</li>
<li><p><strong>Install &amp; Run</strong> – Click to install the model directly from the app and start chatting instantly.</p>
</li>
</ol>
<p>This makes LM Studio a <strong>convenient and user-friendly</strong> option for running local AI models with minimal hassle. 🚀</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738428990062/d54a009a-48eb-4d2b-bfce-6d1bfe138bc0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738429631252/5cfe0051-9334-4aa5-923b-456ba30986f4.png" alt class="image--center mx-auto" /></p>
<p>Then start asking questions:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738429755780/a3b216da-583d-4fcb-b87f-19cd384292ca.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-ollama-commands">Ollama commands</h2>
<p>To see the version of Ollama installed on your system:</p>
<pre><code class="lang-bash">ollama -v
</code></pre>
<p>To see a list of installed models with Ollama:</p>
<pre><code class="lang-plaintext">ollama list
</code></pre>
<p>To see how Deepseek is performing on your system:</p>
<pre><code class="lang-plaintext"># First run deepseek in the terminal:
ollama run deepseek-r1:8b --verbose
# To exit chat mode:
/bye
</code></pre>
<p>Ask a question and check the stats at the end.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738407347421/1ca8e67a-1cd2-4740-967b-2de42fb8a033.png" alt class="image--center mx-auto" /></p>
<p>To determine if the model is running on your <strong>CPU or GPU</strong>, use the following command:</p>
<pre><code class="lang-bash">ollama ps
</code></pre>
<p>Output:</p>
<pre><code class="lang-bash">NAME              ID              SIZE      PROCESSOR          UNTIL
deepseek-r1:8b    28f8fd6cdc67    6.3 GB    26%/74% CPU/GPU    28 seconds from now
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The larger the model, the more VRAM it requires. If your GPU runs out of available memory, the system may offload part of the workload to the CPU, resulting in slower performance.</div>
</div>

<p>To make sure that your system has detected your GPU, you can check the server.log located at <code>C:\Users\&lt;username&gt;\AppData\Local\Ollama</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738408116020/8e78d84c-7c3a-492e-8010-dc940b74f710.png" alt class="image--center mx-auto" /></p>
<p>Feel free to drop any questions in the comments section—I’m happy to help! 😊</p>
]]></content:encoded></item><item><title><![CDATA[How to Remove Sensitive Data from Git History: 2 Tools Explained]]></title><description><![CDATA[Scenario
Accidentally committing sensitive information, such as API keys, passwords, or personal data like phone numbers, to a Git-based version control system can happen to anyone.
Now, imagine a situation where you’ve pushed an API key that cannot ...]]></description><link>https://devopsdetours.com/how-to-remove-sensitive-data-from-git-history-2-tools-explained</link><guid isPermaLink="true">https://devopsdetours.com/how-to-remove-sensitive-data-from-git-history-2-tools-explained</guid><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Sun, 12 Jan 2025 10:32:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736677764624/8df0de10-7be9-4b63-a568-38c42abc945f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-scenario">Scenario</h2>
<p>Accidentally committing sensitive information, such as API keys, passwords, or personal data like phone numbers, to a Git-based version control system can happen to anyone.</p>
<p>Now, imagine a situation where you’ve pushed an API key that cannot be regenerated, a password that cannot be reset, or—worse—a personal phone number that’s now publicly accessible. No one wants to deal with the hassle of purchasing a new SIM card or facing potential security risks simply because of an oversight.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Disclaimer: </strong>If you have accidentally exposed sensitive information, such as an API key, password, or access token, the first step should always be to revoke and regenerate the exposed credential immediately. This ensures that unauthorized access is prevented. This article focuses primarily on addressing cases where the sensitive data cannot be changed (e.g., personal identifiers, non-regenerable keys, phone numbers, etc…) and must be removed from a public repository as quickly as possible to mitigate potential risks. Following these steps will help ensure your repository does not retain publicly accessible sensitive data.</div>
</div>

<p>Fortunately, there are effective ways to address this issue and prevent sensitive information from being permanently exposed.</p>
<p><strong>Important Note Before Proceeding:</strong></p>
<blockquote>
<p>⚠️ <em>The following solutions involve rewriting commit history, which will modify all commit hashes. If your workflow depends on commit hashes, consider alternative approaches.</em></p>
<p>⚠️ <em>For teams, rewriting history can impact uncommitted changes made by your teammates. Ensure you coordinate with your team before proceeding.</em></p>
</blockquote>
<h2 id="heading-solution-1-bfg-repo-cleaner">Solution 1: BFG Repo-Cleaner</h2>
<p>The BFG Repo-Cleaner is a powerful Java-based tool that simplifies the process of removing sensitive information from your Git repository’s history. Below are two methods for setting it up:</p>
<h3 id="heading-the-automated-method"><strong>The Automated Method</strong></h3>
<p>To streamline the setup, use the following script, which has been tested on Ubuntu 24.04:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># The following script has been written and tested on ubuntu 24.04:</span>
curl -s https://raw.githubusercontent.com/shahinam2/bfg-repo-cleaner-auto-install/main/auto-install.sh -o auto-install.sh &amp;&amp; chmod +x auto-install.sh &amp;&amp; ./auto-install.sh
</code></pre>
<p>You can find the repository for this script at: <a target="_blank" href="https://github.com/shahinam2/bfg-repo-cleaner-auto-install">GitHub Repo</a>.</p>
<h3 id="heading-the-manual-method"><strong>The Manual Method</strong></h3>
<ol>
<li><p><strong>Download the BFG Repo-Cleaner</strong>:</p>
<ul>
<li>Visit the official website: <a target="_blank" href="https://rtyley.github.io/bfg-repo-cleaner/">BFG Repo-Cleaner</a>.</li>
</ul>
</li>
<li><p><strong>Install Java</strong>:</p>
<ul>
<li>Download and install Java (version 8 or later).</li>
</ul>
</li>
</ol>
<p>This method provides more control over the installation process and is ideal if you prefer a manual setup.</p>
<h3 id="heading-how-to-use-bfg-repo-cleaner">How to use BFG Repo-Cleaner</h3>
<blockquote>
<p>⚠️ <strong>Important:</strong> Before proceeding, ensure you have backed up your repository by cloning it into a separate directory. This will help prevent accidental data loss during the cleanup process.</p>
</blockquote>
<p>Suppose your repository contains a file named <code>credentials</code> that holds sensitive information, and this file was committed in <strong>commit number 2</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676405794/d8c60c8e-a921-49f3-9ab2-f6b3b41b3abf.webp" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676429742/f07c817e-9b07-4faa-adac-c01c7a1db6c0.webp" alt class="image--center mx-auto" /></p>
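<p>If you would like to experiment before touching a real repository, you can recreate this scenario in a disposable repo (names and contents below are placeholders):</p>
<pre><code class="lang-bash"># Build a throwaway repo with a fake secret buried in its history
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "# demo" &gt; README.md
git add . &amp;&amp; git commit -qm "commit 1"
echo "API_KEY=not-a-real-key" &gt; credentials
git add . &amp;&amp; git commit -qm "commit 2"
git log --oneline   # two commits; the second adds 'credentials'
</code></pre>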
<h3 id="heading-remove-the-file"><strong>Remove the File</strong></h3>
<p>First, delete the file containing sensitive information from your local directory:</p>
<pre><code class="lang-bash">rm credentials
</code></pre>
<h3 id="heading-stage-and-commit-the-deletion"><strong>Stage and Commit the Deletion</strong></h3>
<p>Next, stage the change and create a new commit locally to remove the file:</p>
<pre><code class="lang-bash">git add .
git commit -m <span class="hljs-string">"remove the credentials file"</span>
</code></pre>
<p>This ensures that the sensitive file is no longer part of your working directory or future commits.</p>
<h3 id="heading-using-bfg-repo-cleaner"><strong>Using BFG Repo-Cleaner</strong></h3>
<p>Assuming you are already in the directory where <strong>BFG Repo-Cleaner</strong> is located, use the following command to rewrite the repository history and remove the sensitive file:</p>
<pre><code class="lang-bash">java -jar bfg-1.14.0.jar /path/to/your/repo/ --delete-files file-to-remove
</code></pre>
<p><strong>Example</strong>:</p>
<p>If the file to be removed is named <code>credentials</code>, the command would look like this:</p>
<pre><code class="lang-bash">java -jar bfg-1.14.0.jar /home/shahin/bfg-tool-test/ --delete-files credentials
</code></pre>
<p>Output:</p>
<pre><code class="lang-plaintext">Using repo : /home/shahin/bfg-tool-test/.git

Found 2 objects to protect
Found 3 commit-pointing refs : HEAD, refs/heads/main, refs/remotes/origin/main

Protected commits
-----------------

These are your protected commits, and so their contents will NOT be altered:

 * commit 7b439b14 (protected by 'HEAD')

Cleaning
--------

Found 4 commits
Cleaning commits:       100% (4/4)
Cleaning commits completed in 30 ms.

Updating 2 Refs
---------------

        Ref                        Before     After   
        ----------------------------------------------
        refs/heads/main          | 7b439b14 | 1597a9c1
        refs/remotes/origin/main | d611e7c6 | e063add1

Updating references:    100% (2/2)
...Ref update completed in 38 ms.

Commit Tree-Dirt History
------------------------

        Earliest      Latest
        |                  |
          .    D    D    m  

        D = dirty commits (file tree fixed)
        m = modified commits (commit message or parents changed)
        . = clean commits (no changes to file tree)

                                Before     After   
        -------------------------------------------
        First modified commit | ab6b164d | a11e11a6
        Last dirty commit     | d611e7c6 | e063add1

Deleted files
-------------

        Filename      Git id          
        ------------------------------
        credentials | df20d103 (52 B )


In total, 5 object ids were changed. Full details are logged here:

        /home/shahin/bfg-tool-test.bfg-report/2025-01-11/23-01-56

BFG run is complete! When ready, run: git reflog expire --expire=now --all &amp;&amp; git gc --prune=now --aggressive
</code></pre>
<p><strong>What the Output Means:</strong></p>
<p><strong>Protected Commits</strong>:</p>
<ul>
<li><p>BFG preserved the commit <code>7b439b14</code> because it was "protected by 'HEAD'." Protected commits are typically the latest commits in your repository, left untouched to prevent accidental data corruption.</p>
</li>
<li><p>If the <code>credentials</code> file exists in this protected commit, you need to clean it manually before running BFG again.</p>
</li>
</ul>
<p><strong>Cleaning Results</strong>:</p>
<ul>
<li><p>BFG identified and cleaned <strong>4 commits</strong> in your repository that had the <code>credentials</code> file in their history.</p>
</li>
<li><p>It updated <strong>2 references</strong> (<code>refs/heads/main</code> and <code>refs/remotes/origin/main</code>) to point to the rewritten history.</p>
</li>
</ul>
<p><strong>Commit Tree-Dirt History</strong>:</p>
<ul>
<li>Indicates which commits were modified or cleaned. <code>D</code> (dirty commits) were fixed by BFG.</li>
</ul>
<p><strong>Deleted Files</strong>:</p>
<ul>
<li>BFG successfully found and flagged the <code>credentials</code> file (<code>df20d103</code> is its Git ID). This indicates the file was removed from the Git history for the rewritten commits.</li>
</ul>
<h3 id="heading-steps-to-finalize-and-verify">Steps to Finalize and Verify</h3>
<p><strong>Run Garbage Collection</strong>:</p>
<p>After running BFG Repo-Cleaner, the process is <strong>not fully complete</strong>. To finalize the cleanup, you must remove any deleted objects from the Git repository by executing the garbage collection command provided in the output.</p>
<p>Run the following command to clean up your repository:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Ensure you are in the root directory of your repository before executing this command</span>
git reflog expire --expire=now --all &amp;&amp; git gc --prune=now --aggressive
</code></pre>
<p>Expected output:</p>
<pre><code class="lang-plaintext">Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (7/7), 612 bytes | 612.00 KiB/s, done.
Total 7 (delta 2), reused 7 (delta 2), pack-reused 0
remote: Resolving deltas: 100% (2/2), done.
To github.com:shahinam2/bfg-tool-test.git
 + d611e7c...1597a9c main -&gt; main (forced update)
</code></pre>
<p>This will permanently delete the orphaned objects (e.g., the <code>credentials</code> file) from the repository.</p>
<p>Verify Removal of Sensitive File:</p>
<p>Ensure that the sensitive file, such as <code>credentials</code>, has been completely removed from your Git history by running the following commands:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span> --all --grep=<span class="hljs-string">"credentials"</span>
git grep <span class="hljs-string">"credentials"</span>
</code></pre>
<p>If both commands return no results, it means the file has been successfully removed after running BFG and performing garbage collection.</p>
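<p>Note that <code>git log --grep</code> matches commit <em>messages</em> and <code>git grep</code> searches the current working tree, so on their own they can miss a file that only survives in old commits. A stricter check is to ask whether any commit in any ref still touches the path:</p>
<pre><code class="lang-bash"># Empty output means no commit in history references the file path anymore
git log --all --full-history --oneline -- credentials
</code></pre>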
<p><strong>Force Push to Remote</strong>: If your repository is hosted on a remote platform (e.g., GitHub), you’ll need to push the rewritten history using a force push:</p>
<pre><code class="lang-bash">git push origin --force
</code></pre>
<blockquote>
<p>⚠️ <strong>Important:</strong> Force-pushing will overwrite the repository history on the remote server. Be sure to notify all collaborators, as they will need to re-clone the repository to avoid conflicts.</p>
</blockquote>
<p><strong>Before Cleanup</strong>:</p>
<ul>
<li><p>The commit history shows <code>commit 2</code>, which contains the sensitive <code>credentials</code> file.</p>
</li>
<li><p>The file's content is visible within the repository.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676620162/aa06b36a-5fcf-40e7-a1c2-ca4d74d79290.webp" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736679357471/d50e2331-dae5-4077-84cf-fbdad4d0d68d.webp" alt class="image--center mx-auto" /></p>
<p><strong>After Cleanup</strong>:</p>
<ul>
<li><p><code>commit 2</code> has been rewritten to remove the sensitive content using BFG Repo-Cleaner.</p>
</li>
<li><p>The sensitive <code>credentials</code> file is no longer visible in the repository history, as confirmed by the updated commit structure.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676628982/5c591842-670a-46f2-85b2-8684009e1348.webp" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736679384820/65c72af6-a173-4d59-baf7-e42fd67d5e35.webp" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-solution-2-git-filter-repo"><strong>Solution 2: Git-Filter-Repo</strong></h2>
<p><code>git-filter-repo</code> offers several advantages over BFG Repo-Cleaner, making it a more versatile and efficient choice for rewriting Git history:</p>
<ol>
<li><p><strong>Flexibility</strong>:</p>
<p> Provides comprehensive features for history rewriting, unlike BFG, which focuses on specific tasks like removing sensitive data.</p>
</li>
<li><p><strong>No Java Dependency</strong>:</p>
<p> Python-based and lighter on resources, whereas BFG requires Java Runtime Environment.</p>
</li>
<li><p><strong>No Protected Commits</strong>:</p>
<p> Can modify all commits, including the latest, ensuring complete data removal. BFG protects the latest commit by default, requiring manual cleanup.</p>
</li>
<li><p><strong>Speed</strong>:</p>
<p> Optimized for performance and generally faster for large repositories compared to BFG.</p>
</li>
<li><p><strong>Active Maintenance</strong>:</p>
<p> Regularly updated with detailed documentation, ensuring compatibility with modern Git features.</p>
</li>
<li><p><strong>Customizable Outputs</strong>:</p>
<p> Generates detailed logs and mappings for auditing, unlike BFG’s simpler reporting.</p>
</li>
<li><p><strong>Core Git Integration</strong>:</p>
<p> Relies on core Git functionality, making it portable and easier to integrate into workflows.</p>
</li>
<li><p><strong>Broad Use Cases</strong>:</p>
<p> Versatile enough for various history-rewriting tasks, not limited to removing sensitive data.</p>
</li>
</ol>
<h3 id="heading-how-to-use-git-filter-repo"><strong>How to Use Git-Filter-Repo</strong></h3>
<p><strong>Install</strong> <code>git-filter-repo</code>:</p>
<p>If not already installed, use the following command:</p>
<pre><code class="lang-bash">pip install git-filter-repo
</code></pre>
<p><strong>Backup Your Repository:</strong></p>
<p>Before making changes, back up your repository to prevent data loss:</p>
<pre><code class="lang-bash">cp -r /path/to/your/repo /path/to/your/repo-backup

<span class="hljs-comment"># Example:</span>
cp -r /home/shahin/bfg-tool-test /home/shahin/bfg-tool-test-backup
</code></pre>
<p><strong>Remove the File:</strong></p>
<p>Run the following command to remove all instances of the <code>credentials</code> file from your Git history:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Assuming that the credential file is in the current folder, otherwise provide the full path</span>
git filter-repo --sensitive-data-removal --invert-paths --path credentials
</code></pre>
<p><code>--path credentials</code>: targets the <code>credentials</code> file.</p>
<p><code>--invert-paths</code>: keeps everything <em>except</em> the targeted path, effectively removing the file from the repository's history.</p>
<p>This ensures the sensitive file is completely removed from the repository while keeping other data intact.</p>
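<p>One caveat: recent versions of <code>git filter-repo</code> remove the <code>origin</code> remote after rewriting, as a safeguard against accidentally pushing rewritten history. If your push fails because <code>origin</code> no longer exists, re-add it first (the URL below is a placeholder):</p>
<pre><code class="lang-bash"># Restore the remote that git filter-repo removed (placeholder URL)
git remote add origin git@github.com:your-user/your-repo.git
</code></pre>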
<p><strong>Force-Push the Updated Repository:</strong></p>
<p>If your repository is hosted on a remote platform (e.g., GitHub), you need to push the rewritten history with a force push:</p>
<pre><code class="lang-bash">git push origin --force
</code></pre>
<p><strong>Verify Removal:</strong></p>
<p>Run these commands to confirm the <code>credentials</code> file is no longer in the repository history:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">log</span> --all --grep=<span class="hljs-string">"credentials"</span>
git grep <span class="hljs-string">"credentials"</span>
</code></pre>
<p>No results indicate successful removal.</p>
<p><strong>Before Cleanup:</strong> Commit history shows <code>commit 2</code>, which contains the sensitive <code>credentials</code> file, with its content visible in the repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676764099/c3be4e35-a31b-4027-9414-26bef22aca4f.webp" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676768577/63f78822-157f-4c43-b36f-f8604c50bfc5.webp" alt class="image--center mx-auto" /></p>
<p><strong>After Cleanup:</strong></p>
<p>After running the cleanup steps, <code>commit 2</code> has been successfully removed from the history, ensuring the sensitive content is no longer accessible.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736676777074/da2a9ad2-fa70-4ade-b0ed-b07bb4bd441a.webp" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-git-guardian-prevention-over-cure"><strong>Git Guardian: Prevention Over Cure</strong></h2>
<p>Avoid the hassle of removing sensitive data from your repository by preventing it from being pushed in the first place. Tools like <strong>Git Guardian</strong> can monitor and block sensitive information from being added to Git-based platforms.</p>
<p>Visit Git Guardian: <a target="_blank" href="https://www.gitguardian.com/">https://www.gitguardian.com/</a></p>
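<p>Even without a dedicated service, a minimal pre-commit hook can serve as a last line of defense. The sketch below is deliberately crude, matching only a few hard-coded patterns; purpose-built scanners detect far more:</p>
<pre><code class="lang-bash"># Install a crude pre-commit hook that rejects staged changes
# containing obvious secret-looking patterns (illustrative only)
cat &gt; .git/hooks/pre-commit &lt;&lt;'EOF'
#!/bin/sh
if git diff --cached | grep -qE 'API_KEY|SECRET|PASSWORD'; then
  echo "Possible secret in staged changes; commit aborted." &gt;&amp;2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
</code></pre>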
<hr />
<h2 id="heading-sources"><strong>Sources</strong></h2>
<h4 id="heading-bfg-repo-cleaner"><strong>BFG Repo-Cleaner</strong></h4>
<ul>
<li><p><a target="_blank" href="https://github.com/rtyley/bfg-repo-cleaner">GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=msUDPYsbABY">YouTube Tutorial</a></p>
</li>
</ul>
<h4 id="heading-git-filter-repo"><strong>Git-Filter-Repo</strong></h4>
<ul>
<li><p><a target="_blank" href="https://github.com/newren/git-filter-repo">GitHub Repository</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=KXPmiKfNlZE">YouTube Tutorial</a></p>
</li>
<li><p><a target="_blank" href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository#purging-a-file-from-your-local-repositorys-history-using-git-filter-repo">GitHub Guide</a></p>
</li>
</ul>
<p>Photo by <a target="_blank" href="https://unsplash.com/@katarina_kate?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Katarina Humajova</a> on <a target="_blank" href="https://unsplash.com/photos/a-young-boy-is-peeking-out-from-behind-a-gate-ifnEgAUoc5Y?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>
]]></content:encoded></item><item><title><![CDATA[How to login to Docker Desktop for Linux]]></title><description><![CDATA[Logging into Docker Desktop for Linux is not a straightforward process.
The Issue
The reason why you have landed here is most probably because of the following error:

Solution
First, create a gpg key:
gpg --generate-key

Expected output after answer...]]></description><link>https://devopsdetours.com/how-to-login-to-docker-desktop-for-linux</link><guid isPermaLink="true">https://devopsdetours.com/how-to-login-to-docker-desktop-for-linux</guid><category><![CDATA[Docker]]></category><category><![CDATA[Linux]]></category><category><![CDATA[DockerDesktop]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Sat, 21 Dec 2024 20:30:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734813338663/78c4809a-bc15-4aaf-8022-dcbf654dc731.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Logging into Docker Desktop for Linux is not a straightforward process.</p>
<h3 id="heading-the-issue">The Issue</h3>
<p>You have most likely landed here because of the following error:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734812308890/b5402888-8891-48a0-b4ee-118588b333bf.png" alt="Unable to log in. You must initialize pass before logging in to Docker Desktop." class="image--center mx-auto" /></p>
<h3 id="heading-solution">Solution</h3>
<p>First, generate a GPG key:</p>
<pre><code class="lang-bash">gpg --generate-key
</code></pre>
<p>Expected output after answering all questions:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734812681058/b9dc672b-67ff-4500-b303-a8eef82eb5b3.png" alt class="image--center mx-auto" /></p>
<p>Then run <code>pass init</code> with the GPG key ID shown in the screenshot below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734812773527/f9bbad19-7ad4-489d-bb43-be313baa76e7.png" alt class="image--center mx-auto" /></p>
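<p>In generic form, the two commands are as follows (the key ID below is a placeholder; substitute the one printed by your own <code>gpg</code> output):</p>
<pre><code class="lang-bash"># List your keys to find the key ID (the long hex string under "pub")
gpg --list-keys

# Initialize pass with that key ID so Docker Desktop can store credentials
pass init YOUR_GPG_KEY_ID
</code></pre>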
<p>Now the sign-in button works as expected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734812931299/a41a1aa3-29bd-462e-8912-18bde460be7f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734812949899/8fa278dc-9203-46c6-9b22-e64a9d93026b.png" alt class="image--center mx-auto" /></p>
<p>Sources:</p>
<p><a target="_blank" href="https://docs.docker.com/desktop/get-started/#credentials-management-for-linux-users">https://docs.docker.com/desktop/get-started/#credentials-management-for-linux-users</a></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=AMcvwqvgU5U&amp;t=590s">https://www.youtube.com/watch?v=AMcvwqvgU5U&amp;t=590s</a></p>
]]></content:encoded></item><item><title><![CDATA[Separating Myths from Reality: The Worth of IT Certifications]]></title><description><![CDATA[Are IT Certifications Still Worth It?
In recent years, platforms like LinkedIn have been rife with debates about the value of IT certifications. Many senior professionals dismiss certifications as worthless. But how accurate is this claim? Let’s take...]]></description><link>https://devopsdetours.com/separating-myths-from-reality-the-worth-of-it-certifications</link><guid isPermaLink="true">https://devopsdetours.com/separating-myths-from-reality-the-worth-of-it-certifications</guid><category><![CDATA[IT]]></category><category><![CDATA[Certification]]></category><category><![CDATA[worth]]></category><category><![CDATA[evaluations]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 13 Dec 2024 08:32:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738698968603/b78e38a8-b529-4cb3-ab25-93cb5938f643.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-are-it-certifications-still-worth-it">Are IT Certifications Still Worth It?</h3>
<p>In recent years, platforms like LinkedIn have been rife with debates about the value of IT certifications. Many senior professionals dismiss certifications as worthless. But how accurate is this claim? Let’s take a closer look at both sides of the argument and unpack why IT certifications can still hold significant value—if approached correctly.</p>
<hr />
<h3 id="heading-why-the-critics-are-wrong">Why the Critics Are Wrong</h3>
<p>To illustrate the value of IT certifications, let me share a personal experience from my career.</p>
<p>In 2021, I transitioned from being a network engineer to a DevOps professional. At the time, my Linux knowledge was minimal—essentially limited to using the <code>ls</code> command. Realizing the importance of Linux in my new role, I decided to approach my learning journey methodically. Instead of relying on random YouTube videos or scattered online tutorials, I opted to pursue the CompTIA Linux+ certification. This decision added structure and focus to my learning process.</p>
<h4 id="heading-a-strategic-approach-to-learning">A Strategic Approach to Learning</h4>
<p>The first step was selecting the right learning materials. After some research, I found <a target="_blank" href="http://itpro.tc">ITProTV</a>, where Don Pezet’s teaching style resonated perfectly with me. His lessons provided clarity, making complex topics approachable.</p>
<p>While studying, I built my own lab environment using VMware Workstation and multiple Linux distributions. This hands-on approach allowed me to experiment, break systems, and troubleshoot issues—creating a dynamic “playground” for learning. Over the course of three months, I developed a deep understanding of Linux concepts by repeatedly applying what I learned.</p>
<h4 id="heading-reinforcing-knowledge-through-practice">Reinforcing Knowledge Through Practice</h4>
<p>Once I completed the study material, I began preparing for the Linux+ exam with practice tests. For the questions I didn’t fully understand, I recreated the scenarios in my lab environment to grasp the underlying problems and solutions. This targeted approach not only prepared me for the exam but also reinforced my practical skills.</p>
<p>By the time I passed the Linux+ exam on my first attempt, I had built a solid foundation of Linux knowledge—one that would later prove invaluable in real-world scenarios.</p>
<h4 id="heading-career-impact">Career Impact</h4>
<p>Shortly after earning my Linux+ certification, I landed a job as a 2nd Level IT Support Engineer (DevOps Engineer). My manager explicitly stated that my Linux+ certification was a key factor in their hiring decision, as it demonstrated both my technical competence and my commitment to self-improvement.</p>
<p>While the challenges I faced at work often extended beyond what I had learned, the strong foundation I built enabled me to research and resolve complex issues effectively. This underscores the value of certifications when pursued correctly.</p>
<hr />
<h3 id="heading-when-the-critics-are-right">When the Critics Are Right</h3>
<p>Unfortunately, not everyone approaches certifications with the same dedication. I’ve encountered individuals who, despite having no prior cloud experience, quickly amassed multiple AWS and Azure certifications by relying solely on exam dumps. By memorizing answers without gaining practical knowledge, they managed to pass exams but lacked the skills required to perform in real-world environments.</p>
<p>The result? They often fail interviews and disappoint hiring managers, who then take to platforms like LinkedIn to denounce the value of IT certifications. These instances reflect a misuse of the certification process rather than an inherent flaw in certifications themselves.</p>
<hr />
<h3 id="heading-the-bottom-line">The Bottom Line</h3>
<p>IT certifications are only as valuable as the effort and approach you put into earning them. If you:</p>
<ul>
<li><p>Select comprehensive learning resources,</p>
</li>
<li><p>Dedicate time to hands-on practice and experimentation,</p>
</li>
<li><p>Use practice exams to bridge gaps in your knowledge,</p>
</li>
</ul>
<p>then your certification can become a powerful asset. It can open doors to new career opportunities and provide a strong foundation for mastering advanced technologies.</p>
<p>However, if you shortcut the process by relying solely on exam dumps and neglecting practical experience, your certification will hold little to no value.</p>
<hr />
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>When pursued with integrity and dedication, IT certifications remain a worthy investment. They not only validate your skills but also facilitate continued learning and career growth. For those willing to put in the effort, certifications can still be a game-changer.</p>
]]></content:encoded></item><item><title><![CDATA[Comparing Different Models of the Software Development Life Cycles (SDLC)]]></title><description><![CDATA[What is Software Development Life Cycles (SDLC)?
The Software Development Life Cycle (SDLC) is a structured process used for developing software systems. It defines a series of steps or phases to guide teams in planning, creating, testing, deploying,...]]></description><link>https://devopsdetours.com/comparing-different-models-of-the-software-development-life-cycles-sdlc</link><guid isPermaLink="true">https://devopsdetours.com/comparing-different-models-of-the-software-development-life-cycles-sdlc</guid><category><![CDATA[software development]]></category><category><![CDATA[lifecycle]]></category><category><![CDATA[SDLC Models]]></category><category><![CDATA[SDLC]]></category><category><![CDATA[Devops]]></category><category><![CDATA[v-model]]></category><category><![CDATA[agile development]]></category><category><![CDATA[Waterfall]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Sun, 08 Dec 2024 11:19:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733656436046/f10c8338-a129-467a-9147-41cf28f02800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-software-development-life-cycles-sdlc">What Is the Software Development Life Cycle (SDLC)?</h2>
<p>The <strong>Software Development Life Cycle (SDLC)</strong> is a structured process used for developing software systems. It defines a series of steps or phases to guide teams in planning, creating, testing, deploying, and maintaining software. SDLC helps ensure that the software is high-quality, meets user requirements, and is delivered within budget and on time.</p>
<hr />
<h2 id="heading-waterfall-model"><strong>Waterfall Model</strong></h2>
<p>The <strong>Waterfall Model</strong> is one of the earliest and simplest SDLC models, following a sequential approach where each phase must be completed before the next phase begins. It is linear and structured, making it easy to follow for smaller projects with well-defined requirements.</p>
<h3 id="heading-phases-in-the-waterfall-model"><strong>Phases in the Waterfall Model</strong></h3>
<ol>
<li><p><strong>Requirement Analysis:</strong></p>
<ul>
<li><p>Gather and document the requirements for the system.</p>
</li>
<li><p>Requirements are assumed to be clear and unchanging.</p>
</li>
</ul>
</li>
<li><p><strong>System Design:</strong></p>
<ul>
<li><p>Translate the requirements into a detailed system architecture.</p>
</li>
<li><p>Define hardware and software specifications.</p>
</li>
</ul>
</li>
<li><p><strong>Implementation (Coding):</strong></p>
<ul>
<li><p>Develop the software based on the design specifications.</p>
</li>
<li><p>Unit testing for each module is conducted at this stage.</p>
</li>
</ul>
</li>
<li><p><strong>Integration and Testing:</strong></p>
<ul>
<li><p>Combine all modules and test the entire system.</p>
</li>
<li><p>Identify and fix defects.</p>
</li>
</ul>
</li>
<li><p><strong>Deployment:</strong></p>
<ul>
<li>Deliver the software to the customer or make it live in the production environment.</li>
</ul>
</li>
<li><p><strong>Maintenance:</strong></p>
<ul>
<li>Handle updates, fixes, and enhancements based on user feedback.</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Simple and easy to understand and manage.</td><td>Inflexible; not suitable for projects where requirements may change.</td></tr>
<tr>
<td>Phases are completed one at a time, making it easier to track progress.</td><td>Cannot accommodate new requirements once the process starts.</td></tr>
<tr>
<td>Well-suited for smaller projects with clear and fixed requirements.</td><td>Testing is done at the end, increasing the risk of discovering major issues late.</td></tr>
<tr>
<td>Documentation is comprehensive and facilitates onboarding.</td><td>Not ideal for complex or long-term projects.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-agile-model"><strong>Agile Model</strong></h2>
<p>The <strong>Agile Model</strong> is an iterative and incremental software development methodology that emphasizes flexibility, collaboration, and customer involvement. Instead of completing the entire project in one go, Agile delivers the software in small, manageable parts called <strong>iterations</strong> or <strong>sprints</strong>, each lasting 1-4 weeks. At the end of each iteration, a functional and usable product increment is delivered.</p>
<p>Agile is designed to adapt to changing requirements, foster collaboration between cross-functional teams, and prioritize customer satisfaction by delivering high-quality software quickly and continuously.</p>
<h3 id="heading-phases-of-the-agile-model"><strong>Phases of the Agile Model</strong></h3>
<ol>
<li><p><strong>Concept or Planning:</strong></p>
<ul>
<li><p>Define high-level goals, vision, and a broad roadmap.</p>
</li>
<li><p>Identify key stakeholders and requirements.</p>
</li>
</ul>
</li>
<li><p><strong>Iteration Planning:</strong></p>
<ul>
<li><p>Plan the objectives and deliverables for the upcoming sprint.</p>
</li>
<li><p>Define a prioritized list of features or user stories (often tracked in a backlog).</p>
</li>
</ul>
</li>
<li><p><strong>Design and Development:</strong></p>
<ul>
<li><p>Develop small parts of the software during the sprint.</p>
</li>
<li><p>Use collaboration tools (e.g., Jira, Trello) to track progress.</p>
</li>
</ul>
</li>
<li><p><strong>Testing and Feedback:</strong></p>
<ul>
<li><p>Test the product increment at the end of each sprint.</p>
</li>
<li><p>Gather feedback from stakeholders or users to refine the product.</p>
</li>
</ul>
</li>
<li><p><strong>Release:</strong></p>
<ul>
<li><p>Deliver working software at the end of each sprint.</p>
</li>
<li><p>The release may go live (for the user) or stay in-house for further refinement.</p>
</li>
</ul>
</li>
<li><p><strong>Review and Retrospective:</strong></p>
<ul>
<li><p>Review the sprint's achievements and identify areas for improvement.</p>
</li>
<li><p>Plan the next sprint based on lessons learned.</p>
</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Highly flexible and adaptable to changing requirements.</td><td>Requires close collaboration and skilled professionals.</td></tr>
<tr>
<td>Encourages continuous feedback and iteration.</td><td>Can be challenging to predict costs and timeframes.</td></tr>
<tr>
<td>Delivers working software quickly and frequently.</td><td>May lack documentation due to focus on incremental delivery.</td></tr>
<tr>
<td>Well-suited for complex projects and customer-focused solutions.</td><td>Can be chaotic without proper planning and leadership.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-devops-model"><strong>DevOps Model</strong></h2>
<p>The <strong>DevOps Model</strong> is a software development methodology that emphasizes collaboration between <strong>development (Dev)</strong> and <strong>operations (Ops)</strong> teams. The goal is to break down silos between these teams to enable continuous integration, delivery, and deployment of software. It focuses on automation, faster delivery, and improving the reliability of software systems.</p>
<p>The DevOps Model combines cultural philosophies, practices, and tools to accelerate the software delivery lifecycle while maintaining high quality, stability, and security.</p>
<h3 id="heading-phases-of-the-devops-lifecycle"><strong>Phases of the DevOps Lifecycle</strong></h3>
<ol>
<li><p><strong>Plan:</strong></p>
<ul>
<li><p>Define the product vision, requirements, and roadmap.</p>
</li>
<li><p>Use tools like Jira or Trello for project management.</p>
</li>
</ul>
</li>
<li><p><strong>Develop:</strong></p>
<ul>
<li><p>Write, build, and test code continuously.</p>
</li>
<li><p>Leverage CI tools like Jenkins, GitHub Actions, or GitLab CI.</p>
</li>
</ul>
</li>
<li><p><strong>Build:</strong></p>
<ul>
<li>Automate the creation of builds using tools like Maven, Gradle, or npm.</li>
</ul>
</li>
<li><p><strong>Test:</strong></p>
<ul>
<li><p>Run automated tests to verify the code's functionality and quality.</p>
</li>
<li><p>Tools like Selenium, TestNG, or JUnit are commonly used.</p>
</li>
</ul>
</li>
<li><p><strong>Release:</strong></p>
<ul>
<li><p>Deploy the code to staging or production environments.</p>
</li>
<li><p>Tools like Ansible, Chef, and Terraform enable automated deployments.</p>
</li>
</ul>
</li>
<li><p><strong>Deploy:</strong></p>
<ul>
<li><p>Continuously deliver updates to end-users without downtime.</p>
</li>
<li><p>Use tools like Kubernetes, Docker, and AWS ECS for container orchestration.</p>
</li>
</ul>
</li>
<li><p><strong>Operate:</strong></p>
<ul>
<li><p>Monitor the system's performance and reliability in production.</p>
</li>
<li><p>Tools like Prometheus, Grafana, and Datadog are commonly used.</p>
</li>
</ul>
</li>
<li><p><strong>Monitor:</strong></p>
<ul>
<li><p>Track system health, application performance, and user feedback.</p>
</li>
<li><p>Use alerts and dashboards to ensure system reliability.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-key-principles-of-the-devops-model"><strong>Key Principles of the DevOps Model</strong></h3>
<ol>
<li><p><strong>Collaboration and Communication:</strong></p>
<ul>
<li><p>Promotes a shared responsibility culture between development and operations teams.</p>
</li>
<li><p>Encourages open communication and shared objectives.</p>
</li>
</ul>
</li>
<li><p><strong>Automation:</strong></p>
<ul>
<li><p>Automates repetitive tasks like testing, integration, deployment, and monitoring.</p>
</li>
<li><p>Uses tools like Jenkins, Ansible, Docker, Kubernetes, and Terraform.</p>
</li>
</ul>
</li>
<li><p><strong>Continuous Integration (CI):</strong></p>
<ul>
<li><p>Developers frequently integrate code changes into a shared repository.</p>
</li>
<li><p>Automated tests verify the quality of the code.</p>
</li>
</ul>
</li>
<li><p><strong>Continuous Delivery (CD):</strong></p>
<ul>
<li><p>Ensures that software is always in a deployable state.</p>
</li>
<li><p>Automates the process of delivering changes to staging or production environments.</p>
</li>
</ul>
</li>
<li><p><strong>Infrastructure as Code (IaC):</strong></p>
<ul>
<li><p>Manages infrastructure (e.g., servers, networks) using code.</p>
</li>
<li><p>Tools like Terraform and AWS CloudFormation make infrastructure provisioning reliable and reproducible.</p>
</li>
</ul>
</li>
<li><p><strong>Monitoring and Feedback:</strong></p>
<ul>
<li><p>Uses tools like Prometheus, Grafana, and Splunk for real-time monitoring.</p>
</li>
<li><p>Feedback loops help identify and resolve issues quickly.</p>
</li>
</ul>
</li>
</ol>
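<p>To make the Continuous Integration principle above concrete, the stages a CI server runs on every push can be sketched as a small shell script. The stage names and tool examples are illustrative assumptions, not prescriptions:</p>
<pre><code class="lang-bash">#!/usr/bin/env sh
set -e   # abort on the first failing stage, exactly as a CI server would

run_stage() {
  echo "[ci] $1"
}

run_stage "checkout: pull the latest code from the shared repository"
run_stage "build:    compile and package (e.g. mvn package, npm run build)"
run_stage "test:     run the automated suite; any failure stops the pipeline"
run_stage "publish:  hand the validated artifact over to the CD stage"
CI_RESULT=success
</code></pre>
<p>Real CI servers such as Jenkins or GitHub Actions express the same sequence declaratively, but this fail-fast ordering is the essence of continuous integration.</p>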
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Encourages collaboration between development and operations teams.</td><td>Requires a cultural shift and investment in automation tools.</td></tr>
<tr>
<td>Automates processes, reducing errors and improving deployment speed.</td><td>Can be costly and time-intensive to implement initially.</td></tr>
<tr>
<td>Enhances scalability, reliability, and quality of software products.</td><td>Not ideal for small-scale projects or teams with limited resources.</td></tr>
<tr>
<td>Provides faster feedback, enabling quick fixes and improvements.</td><td>Can lead to over-reliance on automation if not managed carefully.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-iterative-model"><strong>Iterative Model</strong></h2>
<p>The <strong>Iterative Model</strong> is a software development approach in which a project is developed and refined through repeated cycles (iterations). Instead of delivering the final product in one go, the development process starts with a basic implementation of the requirements and gradually improves it with each iteration. Every iteration involves the stages of planning, designing, coding, and testing, resulting in an incrementally improved version of the software.</p>
<p>The model is particularly useful for projects where requirements are not fully understood at the beginning and need to evolve over time.</p>
<h3 id="heading-phases-of-the-iterative-model"><strong>Phases of the Iterative Model</strong></h3>
<ol>
<li><p><strong>Initial Planning:</strong></p>
<ul>
<li><p>Define the high-level goals and scope of the project.</p>
</li>
<li><p>Identify core requirements that need to be addressed in the first iteration.</p>
</li>
</ul>
</li>
<li><p><strong>Design:</strong></p>
<ul>
<li><p>Create a design for the current iteration based on defined requirements.</p>
</li>
<li><p>Focus on modularity to allow for incremental additions.</p>
</li>
</ul>
</li>
<li><p><strong>Implementation:</strong></p>
<ul>
<li>Develop the system for the current iteration, focusing on adding specific features or functionality.</li>
</ul>
</li>
<li><p><strong>Testing:</strong></p>
<ul>
<li><p>Test the software for defects and validate functionality.</p>
</li>
<li><p>Incorporate feedback to refine the product.</p>
</li>
</ul>
</li>
<li><p><strong>Evaluation:</strong></p>
<ul>
<li><p>Present the iteration to users or stakeholders for feedback.</p>
</li>
<li><p>Identify areas for improvement and plan the next iteration.</p>
</li>
</ul>
</li>
<li><p><strong>Repeat:</strong></p>
<ul>
<li>Start the next iteration, incorporating feedback and new requirements.</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Allows for early testing and risk identification.</td><td>Can lead to scope creep due to continuous iterations.</td></tr>
<tr>
<td>Suitable for projects with unclear or evolving requirements.</td><td>Requires extensive planning and design efforts upfront.</td></tr>
<tr>
<td>Each iteration provides a functional version of the product.</td><td>Costly due to repeated iterations and testing.</td></tr>
<tr>
<td>Incorporates user feedback during development.</td><td>Not ideal for smaller projects with fixed requirements.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-spiral-model"><strong>Spiral Model</strong></h2>
<p>The <strong>Spiral Model</strong> is a risk-driven software development process that combines the iterative nature of the Iterative Model with the systematic aspects of the Waterfall Model. It emphasizes risk assessment and mitigation in each phase, making it particularly suitable for large, complex, or high-risk projects. The model is visualized as a spiral with multiple loops, each representing a phase of development.</p>
<p>Each loop in the spiral includes planning, risk analysis, development, and evaluation, allowing for incremental improvements with a strong focus on identifying and addressing risks.</p>
<h3 id="heading-phases-of-the-spiral-model"><strong>Phases of the Spiral Model</strong></h3>
<p>Each loop in the spiral consists of four main quadrants:</p>
<ol>
<li><p><strong>Planning:</strong></p>
<ul>
<li><p>Identify objectives, constraints, and alternatives for the project.</p>
</li>
<li><p>Deliverable: Initial requirements and plans for the current phase.</p>
</li>
</ul>
</li>
<li><p><strong>Risk Analysis:</strong></p>
<ul>
<li><p>Assess potential risks and develop strategies to mitigate them.</p>
</li>
<li><p>Prototyping is often used to address uncertainties.</p>
</li>
<li><p>Deliverable: Risk management plan or prototype.</p>
</li>
</ul>
</li>
<li><p><strong>Development and Validation:</strong></p>
<ul>
<li><p>Design, code, and test the software for the current iteration.</p>
</li>
<li><p>Deliverable: A functional version of the product (or part of it).</p>
</li>
</ul>
</li>
<li><p><strong>Evaluation:</strong></p>
<ul>
<li><p>Present the iteration to stakeholders for feedback and review.</p>
</li>
<li><p>Decide whether to continue to the next loop, repeat the current loop, or terminate the project.</p>
</li>
<li><p>Deliverable: Updated requirements and feedback.</p>
</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Combines the advantages of the Waterfall and Iterative models.</td><td>Can be complex to manage and implement.</td></tr>
<tr>
<td>Focuses on risk assessment and mitigation in each iteration.</td><td>Requires extensive expertise in risk analysis.</td></tr>
<tr>
<td>Well-suited for high-risk, large, or complex projects.</td><td>Costly due to multiple iterations and prototyping.</td></tr>
<tr>
<td>Allows for customer feedback in each iteration.</td><td>Not ideal for low-risk or small-scale projects.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-rad-rapid-application-development-model"><strong>RAD (Rapid Application Development) Model</strong></h2>
<p>The <strong>Rapid Application Development (RAD) Model</strong> is a type of software development methodology that prioritizes rapid prototyping and quick delivery over long, detailed planning phases. It emphasizes user involvement and iterative development, allowing developers to quickly create a working prototype and refine it based on user feedback.</p>
<p>The RAD Model is particularly suited for projects where the requirements are well-defined, but the timeline for delivery is tight.</p>
<h3 id="heading-phases-of-the-rad-model"><strong>Phases of the RAD Model</strong></h3>
<p>The RAD Model consists of four main phases:</p>
<ol>
<li><p><strong>Requirement Planning:</strong></p>
<ul>
<li><p>Define high-level business objectives and scope of the project.</p>
</li>
<li><p>Identify core requirements and prioritize features.</p>
</li>
<li><p>Deliverable: Initial project roadmap.</p>
</li>
</ul>
</li>
<li><p><strong>User Design (Prototyping):</strong></p>
<ul>
<li><p>Collaborate with users to create prototypes representing key features.</p>
</li>
<li><p>Users interact with these prototypes to provide feedback.</p>
</li>
<li><p>Deliverable: Functional prototypes for testing and review.</p>
</li>
</ul>
</li>
<li><p><strong>Construction:</strong></p>
<ul>
<li><p>Rapidly develop the software using feedback from the prototyping phase.</p>
</li>
<li><p>Perform unit and integration testing.</p>
</li>
<li><p>Deliverable: Fully functional product increment.</p>
</li>
</ul>
</li>
<li><p><strong>Cutover:</strong></p>
<ul>
<li><p>Deploy the software to the production environment.</p>
</li>
<li><p>Train users, provide documentation, and handle final testing.</p>
</li>
<li><p>Deliverable: Deployed software and user-ready system.</p>
</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Enables faster development and delivery of prototypes.</td><td>Requires highly skilled developers and designers.</td></tr>
<tr>
<td>Encourages customer involvement and feedback during development.</td><td>Not suitable for large-scale or highly complex projects.</td></tr>
<tr>
<td>Reduces development time and improves flexibility.</td><td>Can lead to reduced quality if rushed.</td></tr>
<tr>
<td>Ideal for projects with tight deadlines and well-defined requirements.</td><td>Depends heavily on strong team collaboration and user feedback.</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-v-model-validation-and-verification-model"><strong>V-Model (Validation and Verification Model)</strong></h2>
<p>The <strong>V-Model</strong> (Validation and Verification Model) is a software development methodology that extends the <strong>Waterfall Model</strong> by emphasizing testing at each phase of development. For every development stage on the "left side" of the V, there is a corresponding testing phase on the "right side" of the V. This approach ensures early detection of defects and aligns development activities with validation processes.</p>
<p>It is called the V-Model because the process diagram resembles the letter "V," with development activities descending on the left, meeting at coding, and ascending with validation/testing on the right.</p>
<h3 id="heading-phases-of-the-v-model"><strong>Phases of the V-Model</strong></h3>
<p><strong>Verification Phases (Left Side of the V):</strong></p>
<ol>
<li><p><strong>Requirement Analysis:</strong></p>
<ul>
<li><p>Gather and document user needs and system requirements.</p>
</li>
<li><p>Corresponding Testing Phase: <strong>Acceptance Testing.</strong></p>
</li>
</ul>
</li>
<li><p><strong>System Design:</strong></p>
<ul>
<li><p>Define the overall system architecture and design.</p>
</li>
<li><p>Corresponding Testing Phase: <strong>System Testing.</strong></p>
</li>
</ul>
</li>
<li><p><strong>High-Level Design:</strong></p>
<ul>
<li><p>Break down the system into modules and specify their interactions.</p>
</li>
<li><p>Corresponding Testing Phase: <strong>Integration Testing.</strong></p>
</li>
</ul>
</li>
<li><p><strong>Detailed Design:</strong></p>
<ul>
<li><p>Define the internal logic and structure of each module.</p>
</li>
<li><p>Corresponding Testing Phase: <strong>Unit Testing.</strong></p>
</li>
</ul>
</li>
<li><p><strong>Implementation (Coding):</strong></p>
<ul>
<li><p>Write the code based on the detailed design specifications.</p>
</li>
<li><p>The base of the V marks the completion of development and the start of testing.</p>
</li>
</ul>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Pros</strong></td><td><strong>Cons</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Clear structure with distinct verification and validation phases.</td><td>Rigid and not suitable for evolving requirements.</td></tr>
<tr>
<td>Emphasizes testing at every stage, reducing defects.</td><td>Testing phases can be time-consuming and resource-intensive.</td></tr>
<tr>
<td>Easy to manage due to its linear and sequential nature.</td><td>Requires detailed documentation and planning upfront.</td></tr>
<tr>
<td>Suitable for projects with clear and fixed requirements.</td><td>Not ideal for complex or iterative projects.</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[AWS Parameter Store vs AWS Secrets Manager Comparison and When to Use Each?]]></title><description><![CDATA[Here are two tables comparing AWS Parameter Store and AWS Secrets Manager, and when to use each.
Comparison Table: AWS Parameter Store vs. AWS Secrets Manager




FeatureAWS Parameter StoreAWS Secrets Manager



Primary Use CaseStoring configuration ...]]></description><link>https://devopsdetours.com/aws-parameter-store-vs-aws-secrets-manager-comparison-and-when-to-use-each</link><guid isPermaLink="true">https://devopsdetours.com/aws-parameter-store-vs-aws-secrets-manager-comparison-and-when-to-use-each</guid><category><![CDATA[parameter-store]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS secret manager]]></category><category><![CDATA[pros and cons]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 06 Dec 2024 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733597997385/7486bd70-5d40-4252-924d-24098bbd2364.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here are two tables comparing <strong>AWS Parameter Store</strong> and <strong>AWS Secrets Manager</strong>, and when to use each.</p>
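<p>Before the tables, a quick hands-on illustration of the difference: reading a value from each service with the AWS CLI. The resource names (<code>/myapp/db-host</code>, <code>myapp/db-credentials</code>) are made-up placeholders, and both commands assume configured AWS credentials:</p>
<pre><code class="lang-bash"># Parameter Store: fetch a (possibly KMS-encrypted) configuration value
aws ssm get-parameter --name /myapp/db-host --with-decryption

# Secrets Manager: fetch a managed secret such as database credentials
aws secretsmanager get-secret-value --secret-id myapp/db-credentials
</code></pre>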
<h3 id="heading-comparison-table-aws-parameter-store-vs-aws-secrets-manager"><strong>Comparison Table: AWS Parameter Store vs. AWS Secrets Manager</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>AWS Parameter Store</strong></td><td><strong>AWS Secrets Manager</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Primary Use Case</strong></td><td>Storing configuration data, non-sensitive parameters</td><td>Managing secrets such as database credentials, API keys</td></tr>
<tr>
<td><strong>Secret Rotation</strong></td><td>Not supported directly</td><td>Built-in support for automatic rotation of secrets</td></tr>
<tr>
<td><strong>Encryption</strong></td><td>Uses AWS KMS (optional)</td><td>Uses AWS KMS for encryption</td></tr>
<tr>
<td><strong>Cost</strong></td><td>Free for basic usage; charged for advanced tier</td><td>Paid service; charges for storage and API calls</td></tr>
<tr>
<td><strong>Integration</strong></td><td>Works with AWS Systems Manager, EC2, Lambda</td><td>Integrates with databases, services requiring secret rotation</td></tr>
<tr>
<td><strong>Versioning</strong></td><td>Supports versioning</td><td>Supports versioning</td></tr>
<tr>
<td><strong>Hierarchy Support</strong></td><td>Hierarchical organization with paths</td><td>No hierarchical structure</td></tr>
<tr>
<td><strong>Audit and Monitoring</strong></td><td>AWS CloudTrail support</td><td>More advanced audit capabilities with CloudTrail</td></tr>
<tr>
<td><strong>SDK/API Support</strong></td><td>Fully supported via AWS SDKs and CLI</td><td>Fully supported via AWS SDKs and CLI</td></tr>
<tr>
<td><strong>Ease of Use</strong></td><td>Simple for configuration storage</td><td>Focused on secret management, with more features for sensitive data</td></tr>
<tr>
<td><strong>Rotation Triggers</strong></td><td>Requires manual implementation</td><td>Automatically triggers Lambda functions for rotation</td></tr>
<tr>
<td><strong>Resource Policies</strong></td><td>Limited to IAM policies</td><td>Fine-grained access control and resource policies</td></tr>
</tbody>
</table>
</div><h3 id="heading-when-to-use"><strong>When to Use:</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Use Case</strong></td><td><strong>AWS Parameter Store</strong></td><td><strong>AWS Secrets Manager</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Storing app configurations</strong></td><td>✅ Ideal for configurations like environment variables</td><td>❌ Not the intended use case</td></tr>
<tr>
<td><strong>Managing secrets like passwords and API keys</strong></td><td>❌ Not designed for sensitive secret management</td><td>✅ Perfect for managing sensitive secrets</td></tr>
<tr>
<td><strong>Automatic secret rotation</strong></td><td>❌ Requires custom implementation</td><td>✅ Built-in support</td></tr>
<tr>
<td><strong>Cost-sensitive projects</strong></td><td>✅ Free for basic usage</td><td>❌ Can be costly for extensive use</td></tr>
<tr>
<td><strong>Hierarchical data storage</strong></td><td>✅ Supports hierarchy with path structures</td><td>❌ Does not support hierarchy</td></tr>
<tr>
<td><strong>Frequent access to secrets</strong></td><td>✅ Suitable for frequently accessed non-sensitive parameters</td><td>✅ Suitable for sensitive data with access tracking</td></tr>
<tr>
<td><strong>Compliance requirements (e.g., PCI-DSS)</strong></td><td>❌ May not meet compliance needs without extra effort</td><td>✅ Tailored for compliance scenarios</td></tr>
<tr>
<td><strong>Integration with existing AWS workflows</strong></td><td>✅ Seamlessly integrates into most AWS services</td><td>✅ Specialized for secret integration</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Your DevOps Reading Roadmap: Books for Every Stage of Your Career]]></title><description><![CDATA[DevOps is a journey of continuous learning, and books are a fantastic way to deepen your knowledge. Whether you're a beginner or advancing to senior roles, this curated reading roadmap—with books and their authors—will guide you through each stage of...]]></description><link>https://devopsdetours.com/your-devops-reading-roadmap-books-for-every-stage-of-your-career</link><guid isPermaLink="true">https://devopsdetours.com/your-devops-reading-roadmap-books-for-every-stage-of-your-career</guid><category><![CDATA[Devops]]></category><category><![CDATA[books]]></category><category><![CDATA[Roadmap]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 06 Dec 2024 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733598462248/78de197a-6c48-4f2d-a1b5-3066a0e02675.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>DevOps is a journey of continuous learning, and books are a fantastic way to deepen your knowledge. Whether you're a beginner or advancing to senior roles, this curated reading roadmap—with books and their authors—will guide you through each stage of your career.</p>
<h3 id="heading-first-6-months-build-your-foundation"><strong>First 6 Months: Build Your Foundation</strong></h3>
<p>Start by understanding the core principles of DevOps, its culture, and the technical basics. These books are perfect for laying a strong foundation:</p>
<ul>
<li><p><strong><em>The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win</em></strong> by Gene Kim, Kevin Behr, and George Spafford</p>
<ul>
<li>Learn the fundamentals of DevOps through an engaging story about overcoming IT challenges.</li>
</ul>
</li>
<li><p><strong><em>Effective DevOps: Building a Culture of Collaboration, Affinity, and Tooling at Scale</em></strong> by Jennifer Davis and Katherine Daniels</p>
<ul>
<li>Discover the importance of culture, collaboration, and trust in successful DevOps teams.</li>
</ul>
</li>
<li><p><strong><em>Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation</em></strong> by Jez Humble and David Farley</p>
<ul>
<li>Master CI/CD pipelines and the technical skills needed to automate testing and deployment.</li>
</ul>
</li>
</ul>
<h3 id="heading-6-12-months-dive-into-intermediate-concepts"><strong>6-12 Months: Dive Into Intermediate Concepts</strong></h3>
<p>Once you're comfortable with the basics, focus on cloud-native tools, infrastructure as code, and scaling DevOps practices:</p>
<ul>
<li><p><strong><em>Cloud Native DevOps with Kubernetes: Building, Deploying, and Scaling Modern Applications in the Cloud</em></strong> by John Arundel and Justin Domingus</p>
<ul>
<li>Get hands-on with Kubernetes and learn how to manage and scale containerized applications.</li>
</ul>
</li>
<li><p><strong><em>Infrastructure as Code: Managing Servers in the Cloud</em></strong> by Kief Morris</p>
<ul>
<li>Learn to treat infrastructure like code, using tools like Terraform and CloudFormation to manage cloud resources.</li>
</ul>
</li>
<li><p><strong><em>The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations</em></strong> by Gene Kim, Patrick Debois, John Willis, and Jez Humble</p>
<ul>
<li>This book provides detailed, actionable steps for applying DevOps principles in real-world environments.</li>
</ul>
</li>
</ul>
<h3 id="heading-1-year-and-beyond-advanced-practices"><strong>1 Year and Beyond: Advanced Practices</strong></h3>
<p>Now it's time to tackle complex challenges like system reliability, scalability, and organizational transformation:</p>
<ul>
<li><p><strong><em>Site Reliability Engineering: How Google Runs Production Systems</em></strong> by Niall Richard Murphy, Betsy Beyer, Chris Jones, and Jennifer Petoff</p>
<ul>
<li>Dive into Google’s approach to reliability, scalability, and incident management.</li>
</ul>
</li>
<li><p><strong><em>Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations</em></strong> by Nicole Forsgren, Jez Humble, and Gene Kim</p>
<ul>
<li>Understand the key metrics that drive high-performing teams and learn how to scale DevOps practices effectively.</li>
</ul>
</li>
<li><p><strong><em>Lean Enterprise: How High-Performance Organizations Innovate at Scale</em></strong> by Jez Humble, Joanne Molesky, and Barry O’Reilly</p>
<ul>
<li>Learn to balance speed and stability while driving innovation and scaling practices in large organizations.</li>
</ul>
</li>
</ul>
<h3 id="heading-specialized-focus-enterprise-level-devops"><strong>Specialized Focus: Enterprise-Level DevOps</strong></h3>
<p>For those working in hybrid IT environments or multi-speed enterprises, these books offer valuable strategies:</p>
<ul>
<li><p><strong><em>The DevOps Adoption Playbook: A Guide to Adopting DevOps in a Multi-Speed IT Enterprise</em></strong> by Sanjeev Sharma</p>
<ul>
<li>Navigate the challenges of adopting DevOps in enterprises with a mix of legacy and modern systems.</li>
</ul>
</li>
<li><p><strong><em>The Unicorn Project: A Novel About Developers, Digital Disruption, and Thriving in the Age of Data</em></strong> by Gene Kim</p>
<ul>
<li>Gain the developer’s perspective on thriving in a world of digital disruption and collaboration challenges.</li>
</ul>
</li>
</ul>
<h3 id="heading-why-this-roadmap-works"><strong>Why This Roadmap Works</strong></h3>
<p>This timeline takes you step by step, from foundational knowledge to advanced expertise, providing practical insights at every stage of your DevOps career. By combining cultural understanding with technical skills, you'll be prepared to thrive in any DevOps environment.</p>
<p>Happy reading and building! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Eliminate Uncertainty in AWS IAM Policies with Policy Simulator]]></title><description><![CDATA[Struggling to troubleshoot access issues or validate permissions in AWS? The AWS IAM Policy Simulator removes the guesswork from managing permissions! This powerful tool lets you simulate and test the impact of policies before deploying them, ensurin...]]></description><link>https://devopsdetours.com/remove-the-guesswork-from-your-aws-iam-policies-with</link><guid isPermaLink="true">https://devopsdetours.com/remove-the-guesswork-from-your-aws-iam-policies-with</guid><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[aws policy simulator]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 29 Nov 2024 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738699163853/0738cbf7-f2be-42e8-8047-f49b5a34a90a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Struggling to troubleshoot access issues or validate permissions in AWS? The AWS IAM Policy Simulator removes the guesswork from managing permissions! This powerful tool lets you simulate and test the impact of policies before deploying them, ensuring seamless and secure configurations.</p>
<p>Here’s why it’s a must-use:</p>
<p>✅ Test Safely: Evaluate policies without impacting live environments.</p>
<p>✅ Debug Quickly: Pinpoint why a specific action is allowed or denied.</p>
<p>✅ Boost Security: Fine-tune permissions for the principle of least privilege.</p>
<p>As a Cloud/DevOps Engineer, this tool has been a game-changer in keeping my cloud infrastructure both secure and functional.</p>
<p>Tutorial:</p>
<p><a target="_blank" href="https://lnkd.in/eeRbi5Ku">https://lnkd.in/eeRbi5Ku</a></p>
<p>AWS Policy Simulator Tool:</p>
<p><a target="_blank" href="https://lnkd.in/ePUvSZ3j">https://lnkd.in/ePUvSZ3j</a></p>
<p>Here is what the AWS IAM Policy Simulator looks like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733597570061/91336e0b-5be2-452b-9e4b-2d6a7a701026.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Nginx UI]]></title><description><![CDATA[Yet another Nginx Web UI!
𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬:

Online statistics for server indicators such as CPU usage, memory usage, load average, and disk usage.

Online ChatGPT Assistant

One-click deployment and automatic renewal Let's Encrypt certificates.

Onl...]]></description><link>https://devopsdetours.com/nginx-web-ui</link><guid isPermaLink="true">https://devopsdetours.com/nginx-web-ui</guid><category><![CDATA[nginx]]></category><category><![CDATA[#webui]]></category><category><![CDATA[UI]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 22 Nov 2024 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733599357792/bdb9d227-c088-4495-adc5-09868e84108d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Yet another Nginx Web UI!</p>
<p>𝐅𝐞𝐚𝐭𝐮𝐫𝐞𝐬:</p>
<ul>
<li><p>Online statistics for server indicators such as CPU usage, memory usage, load average, and disk usage.</p>
</li>
<li><p>Online ChatGPT Assistant</p>
</li>
<li><p>One-click deployment and automatic renewal of Let's Encrypt certificates.</p>
</li>
<li><p>Online editing of website configurations with the self-designed NgxConfigEditor, a user-friendly block editor for Nginx configurations, or the Ace Code Editor, which supports Nginx configuration syntax highlighting.</p>
</li>
<li><p>Online viewing of Nginx logs</p>
</li>
<li><p>Written in Go and Vue; distributed as a single executable binary.</p>
</li>
<li><p>Automatically tests the configuration file and reloads Nginx after saving changes.</p>
</li>
<li><p>Web Terminal</p>
</li>
<li><p>Dark Mode</p>
</li>
<li><p>Responsive Web Design</p>
</li>
</ul>
<p>Demo:</p>
<p><a target="_blank" href="https://demo.nginxui.com">https://demo.nginxui.com</a></p>
<p>Username: admin</p>
<p>Password: admin</p>
<p>GitHub:</p>
<p><a target="_blank" href="https://github.com/0xJacky/nginx-ui">https://github.com/0xJacky/nginx-ui</a></p>
]]></content:encoded></item><item><title><![CDATA[Troubleshoot ECS Performance Issues with AWS X-Ray 🚀🔧]]></title><description><![CDATA[Is your app feeling sluggish? 📉
A company recently faced this issue with their microservices on Amazon ECS behind an Application Load Balancer (ALB). Certain user requests were dragging down performance. 😟
Time to dig deeper!
The Solution: AWS X-Ra...]]></description><link>https://devopsdetours.com/troubleshoot-ecs-performance-issues-with-aws-x-ray</link><guid isPermaLink="true">https://devopsdetours.com/troubleshoot-ecs-performance-issues-with-aws-x-ray</guid><category><![CDATA[ECS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[troubleshooting]]></category><category><![CDATA[aws x-ray]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 15 Nov 2024 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733597048996/103b2e5a-bfd7-4fd5-9f21-ac1fd773b3da.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Is your app feeling sluggish? 📉</p>
<p>A company recently faced this issue with their microservices on Amazon ECS behind an Application Load Balancer (ALB). Certain user requests were dragging down performance. 😟</p>
<p>Time to dig deeper!</p>
<p>The Solution: AWS X-Ray 🔍</p>
<p>Here's the fix:</p>
<p>1️⃣ Create a Docker image running the 𝐀𝐖𝐒 𝐗-𝐑𝐚𝐲 𝐝𝐚𝐞𝐦𝐨𝐧.</p>
<p>2️⃣ Run it alongside your microservices in ECS.</p>
<p>3️⃣ Use the X-Ray console to analyze request behavior and pinpoint performance bottlenecks. 🎯</p>
<p>X-Ray’s distributed tracing helps you uncover hidden issues and optimize your app across services. ✨</p>
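<p>Steps 1 and 2 are commonly implemented as a sidecar container in the ECS task definition. Here is a minimal sketch under assumed names: <code>my-service</code> and its image are placeholders, while <code>amazon/aws-xray-daemon</code> is the public daemon image, which listens on UDP port 2000:</p>

```json
{
  "containerDefinitions": [
    {
      "name": "my-service",
      "image": "my-service:latest",
      "environment": [
        { "name": "AWS_XRAY_DAEMON_ADDRESS", "value": "xray-daemon:2000" }
      ],
      "links": ["xray-daemon"]
    },
    {
      "name": "xray-daemon",
      "image": "amazon/aws-xray-daemon",
      "cpu": 32,
      "memoryReservation": 256,
      "portMappings": [
        { "containerPort": 2000, "protocol": "udp" }
      ]
    }
  ]
}
```

<p>Note that <code>links</code> only applies to EC2 launch type with bridge networking; with <code>awsvpc</code> networking or Fargate, drop it and point the X-Ray SDK at <code>localhost:2000</code> instead.</p>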
<p>👉 Learn more about running the X-Ray daemon on Amazon ECS:</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html">Running the X-Ray daemon on Amazon ECS - AWS X-Ray</a></p>
]]></content:encoded></item><item><title><![CDATA[Google Gemini wishes for a human to die!!! 💀💀💀]]></title><description><![CDATA[Here you can find the full chat (scroll to the very bottom):
https://gemini.google.com/share/6d141b742a13?]]></description><link>https://devopsdetours.com/google-gemini-wishes-for-a-human-to-die</link><guid isPermaLink="true">https://devopsdetours.com/google-gemini-wishes-for-a-human-to-die</guid><category><![CDATA[death-wish]]></category><category><![CDATA[gemini]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 15 Nov 2024 23:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733599755301/05f4121e-2963-4aa7-94e6-ae3e16b33627.png" alt class="image--center mx-auto" /></p>
<p>Here you can find the full chat (scroll to the very bottom):</p>
<p><a target="_blank" href="https://gemini.google.com/share/6d141b742a13">https://gemini.google.com/share/6d141b742a13</a></p>
]]></content:encoded></item><item><title><![CDATA[10 AWS hands-on real-world projects for practice]]></title><description><![CDATA[Build a Serverless Web Application:

Build a Serverless Web Application using Generative AI

Create Continuous Delivery Pipeline:

Create a Continuous Delivery Pipeline on AWS

Create and Connect to a MySQL Database with Amazon RDS:

Create and Conne...]]></description><link>https://devopsdetours.com/10-aws-hands-on-real-world-projects-for-practice</link><guid isPermaLink="true">https://devopsdetours.com/10-aws-hands-on-real-world-projects-for-practice</guid><category><![CDATA[AWS]]></category><category><![CDATA[#Hands-on_Project]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Thu, 31 Oct 2024 23:00:00 GMT</pubDate><content:encoded><![CDATA[<ol>
<li>Build a Serverless Web Application:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-amplify-bedrock-cognito-gen-ai/">Build a Serverless Web Application using Generative AI</a></p>
<ol start="2">
<li>Create Continuous Delivery Pipeline:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/create-continuous-delivery-pipeline/">Create a Continuous Delivery Pipeline on AWS</a></p>
<ol start="3">
<li>Create and Connect to a MySQL Database with Amazon RDS:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/create-mysql-db/">Create and Connect to a MySQL Database with Amazon RDS</a></p>
<ol start="4">
<li>Amazon EC2 Backup and Restore Using AWS Backup:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/amazon-ec2-backup-and-restore-using-aws-backup/">Amazon EC2 Backup and Restore Using AWS Backup</a></p>
<ol start="5">
<li>Batch Upload Files to Amazon S3 Using the AWS CLI:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/backup-to-s3-cli/">How to Use Scripts to Back Up Files to Amazon S3 CLI</a></p>
<ol start="6">
<li>Deploy a Web App on AWS Amplify:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/guides/deploy-webapp-amplify/">Deploy a Web Application on Amazon Amplify | Introduction</a></p>
<ol start="7">
<li>Remotely Run Commands on an EC2 Instance with AWS Systems Manager:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/remotely-run-commands-ec2-instance-systems-manager/">How to Remotely Run Commands on an EC2 Instance with AWS Systems Manager | AWS</a></p>
<ol start="8">
<li>Detect, Analyze, and Compare Faces with Amazon Rekognition:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/detect-analyze-compare-faces-rekognition/">Detect, Analyze, and Compare Faces with Amazon Rekognition</a></p>
<ol start="9">
<li>Create an Audio Transcript with Amazon Transcribe:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/create-audio-transcript-transcribe/">How to create an audio transcript with Amazon Transcribe | AWS</a></p>
<ol start="10">
<li>Analyze insights in text with Amazon Comprehend:</li>
</ol>
<p><a target="_blank" href="https://aws.amazon.com/getting-started/hands-on/analyze-sentiment-comprehend/">https://aws.amazon.com/getting-started/hands-on/analyze-sentiment-comprehend/</a></p>
]]></content:encoded></item><item><title><![CDATA[Struggling with DynamoDB Capacity Planning? Here's How to Handle Unpredictable Traffic Like a Pro]]></title><description><![CDATA[Estimating the exact number of reads and writes that your customers will generate can be really challenging in a real-world scenario because website traffic is often unpredictable and can fluctuate throughout the day. Let's walk through how you can a...]]></description><link>https://devopsdetours.com/dynamodb-capacity-planning-like-a-pro</link><guid isPermaLink="true">https://devopsdetours.com/dynamodb-capacity-planning-like-a-pro</guid><category><![CDATA[AWS]]></category><category><![CDATA[DynamoDB]]></category><category><![CDATA[CapacityPlanning]]></category><category><![CDATA[Capacity]]></category><category><![CDATA[Databases]]></category><category><![CDATA[NoSQL]]></category><dc:creator><![CDATA[Shahin Hemmati]]></dc:creator><pubDate>Fri, 25 Oct 2024 22:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733583636596/6c663445-844e-4274-a5e1-ad9f0f425fe1.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Estimating the exact number of reads and writes that your customers will generate can be really challenging in a real-world scenario because website traffic is often <strong>unpredictable</strong> and can <strong>fluctuate</strong> throughout the day. Let's walk through how you can approach this practically, especially when using Amazon DynamoDB.</p>
<h3 id="heading-understanding-the-unpredictability-of-real-world-traffic">Understanding the Unpredictability of Real-World Traffic</h3>
<p>In a real-world scenario, it is often impossible to predict exactly how many visitors your site will have or how many reads and writes they will generate. Traffic could spike unexpectedly (e.g., due to a sale or a social media promotion), and trying to predict that traffic precisely can be like guessing the weather weeks in advance.</p>
<h3 id="heading-1-estimating-capacity-with-assumptions">1. Estimating Capacity with Assumptions</h3>
<p>To help you plan your read and write capacity, you can <strong>make some initial assumptions</strong> based on your website's expected usage. Here’s how you can think about it:</p>
<h3 id="heading-example-scenario-online-product-store">Example Scenario: Online Product Store</h3>
<p>Let's say you are building an <strong>e-commerce site</strong> and using DynamoDB to store information about <strong>products</strong> and <strong>customer orders</strong>. You need to think about both <strong>reads</strong> and <strong>writes</strong> to estimate the capacity:</p>
<ul>
<li><p><strong>Reads</strong>: Customers browsing products are generating <strong>read requests</strong>.</p>
</li>
<li><p><strong>Writes</strong>: When a customer places an order, or a new product is added, that is a <strong>write request</strong>.</p>
</li>
</ul>
<h3 id="heading-estimating-reads">Estimating Reads:</h3>
<ul>
<li><p>You estimate that there will be <strong>1,000 unique visitors</strong> to your website per day.</p>
</li>
<li><p>On average, each visitor <strong>views 10 product pages</strong>. This means you would have: 1,000 visitors * 10 reads per visitor = 10,000 reads per day.</p>
</li>
</ul>
<p>Now let’s estimate <strong>reads per second</strong>:</p>
<ul>
<li><p>Divide 10,000 reads per day by the number of seconds in a day (86,400 seconds).</p>
</li>
<li><p>10,000 / 86,400 ≈ 0.12 reads per second. You can round this up to <strong>1 read per second</strong> to be safe.</p>
</li>
</ul>
<h3 id="heading-estimating-writes">Estimating Writes:</h3>
<ul>
<li><p>Assume that <strong>10% of visitors</strong> place an order.</p>
</li>
<li><p>This means you would have: 1,000 visitors * 10% = 100 orders per day.</p>
</li>
</ul>
<p>Now let’s estimate <strong>writes per second</strong>:</p>
<ul>
<li><p>Divide 100 writes per day by the number of seconds in a day (86,400 seconds).</p>
</li>
<li><p>100 / 86,400 ≈ 0.001 writes per second. Again, round up to <strong>1 write per second</strong> for safety.</p>
</li>
</ul>
<p>Based on this, you could initially configure your DynamoDB table for:</p>
<ul>
<li><p><strong>1 Read Capacity Unit (RCU)</strong> per second.</p>
</li>
<li><p><strong>1 Write Capacity Unit (WCU)</strong> per second.</p>
</li>
</ul>
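<p>The arithmetic above can be wrapped in a small helper. This is a rough sketch only; the visitor count, page views per visitor, and order rate are the illustrative assumptions from this example, not real measurements:</p>

```python
import math

SECONDS_PER_DAY = 86_400

def estimate_capacity(visitors_per_day: int,
                      reads_per_visitor: int,
                      order_rate: float) -> tuple[int, int]:
    """Rough (RCU, WCU) estimate from daily traffic assumptions.

    Rounds each rate up to at least 1 unit per second, as suggested
    above, to leave a small safety margin.
    """
    reads_per_day = visitors_per_day * reads_per_visitor
    writes_per_day = visitors_per_day * order_rate
    rcu = max(1, math.ceil(reads_per_day / SECONDS_PER_DAY))
    wcu = max(1, math.ceil(writes_per_day / SECONDS_PER_DAY))
    return rcu, wcu

print(estimate_capacity(1_000, 10, 0.10))  # (1, 1)
```

<p>With 1,000 visitors, 10 page views each, and a 10% order rate, both rates round up to 1 unit per second, matching the figures worked out above.</p>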
<h3 id="heading-2-capacity-modes-in-dynamodb">2. Capacity Modes in DynamoDB</h3>
<p>Instead of having to estimate exactly, DynamoDB offers two <strong>capacity modes</strong> that are helpful in dealing with unpredictable workloads:</p>
<p><strong>Provisioned Capacity Mode</strong>:</p>
<ul>
<li><p>This is where you set a fixed number of <strong>RCUs</strong> and <strong>WCUs</strong> based on your estimates, as described above.</p>
</li>
<li><p><strong>Pros</strong>: If you have relatively predictable traffic, it allows you to <strong>control costs</strong> by paying only for what you need.</p>
</li>
<li><p><strong>Cons</strong>: If your traffic spikes suddenly and goes beyond your provisioned capacity, you may experience <strong>throttling</strong> (i.e., some requests may fail).</p>
</li>
</ul>
<p><strong>On-Demand Capacity Mode</strong>:</p>
<ul>
<li><p><strong>On-Demand Mode</strong> means you don't need to specify the number of RCUs or WCUs upfront. DynamoDB will <strong>automatically handle scaling</strong> based on incoming traffic.</p>
</li>
<li><p><strong>Pros</strong>: This is ideal for unpredictable workloads where you might get <strong>spikes in traffic</strong>.</p>
</li>
<li><p><strong>Cons</strong>: It's more <strong>expensive</strong> per operation compared to provisioned capacity, but it prevents throttling during spikes.</p>
</li>
</ul>
<h3 id="heading-real-world-strategies-for-handling-capacity">Real-World Strategies for Handling Capacity</h3>
<p>Since predicting exact traffic is challenging, many companies use a combination of the following:</p>
<p><strong>1. Start with On-Demand Mode:</strong></p>
<ul>
<li><p>If your workload is unpredictable, it’s better to start with on-demand capacity mode. This ensures that DynamoDB can scale automatically to handle whatever traffic comes to your website, and you won’t have to deal with capacity planning initially.</p>
</li>
<li><p>This is useful during initial launches, product promotions, or campaigns when it's unclear how much traffic to expect.</p>
</li>
</ul>
<p><strong>2. Monitor Usage Patterns:</strong></p>
<ul>
<li><p>Use AWS CloudWatch to monitor the number of reads and writes on your DynamoDB table.</p>
</li>
<li><p>Once you have a better understanding of your traffic patterns (e.g., after a few weeks or months), you could potentially switch to provisioned capacity if your traffic becomes more predictable, which helps lower your costs.</p>
</li>
</ul>
<p><strong>3. DynamoDB Autoscaling (Provisioned Capacity):</strong></p>
<ul>
<li><p>If you choose provisioned capacity mode, you can also enable autoscaling. This allows DynamoDB to automatically adjust the RCUs and WCUs up or down based on the traffic.</p>
</li>
<li><p>You set a minimum and maximum threshold so that capacity scales within a defined range, avoiding both under-provisioning (which causes throttling) and over-provisioning (which increases cost).</p>
</li>
</ul>
<h3 id="heading-summary">Summary</h3>
<p><strong>Predicting Traffic</strong>: It’s nearly impossible to predict exact traffic in the real world, especially at launch. You can start by estimating based on assumptions, but this is just an approximation.</p>
<p><strong>Capacity Planning Modes</strong>:</p>
<ul>
<li><p>Use <strong>on-demand capacity</strong> if you have <strong>unpredictable traffic</strong> or expect spikes.</p>
</li>
<li><p>Use <strong>provisioned capacity</strong> if your workload is predictable, and consider <strong>autoscaling</strong> to adjust as needed.</p>
</li>
</ul>
<p><strong>Real-World Approach</strong>: Many people start with <strong>on-demand capacity</strong> to avoid worrying about underestimating traffic. Once they understand their traffic patterns and the average number of reads and writes, they may switch to <strong>provisioned capacity</strong> with autoscaling to optimize for cost.</p>
<p>In simple terms, managing read and write capacity units in DynamoDB is all about making sure your table can handle your expected traffic <strong>without throttling</strong>, and the key is to <strong>balance cost with availability</strong> by choosing the right capacity mode based on your application's needs.</p>
]]></content:encoded></item></channel></rss>