<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Stevens Tech Corner]]></title><description><![CDATA[Cloud Tech and Automation;]]></description><link>https://stevenrhodes.us/</link><image><url>http://stevenrhodes.us/favicon.png</url><title>Stevens Tech Corner</title><link>https://stevenrhodes.us/</link></image><generator>Ghost 4.2</generator><lastBuildDate>Sat, 19 Oct 2024 23:55:24 GMT</lastBuildDate><atom:link href="https://stevenrhodes.us/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[List the IP Address of Azure Load Balancers]]></title><description><![CDATA[<p>Today I was tasked with inventorying assets and determining whether they were involved in handling ePHI (electronic Personal Health Information). The challenge I was presented was being given only the IP Address of the Load Balancer and the default name created by Azure - &quot;kubernetes&quot;. Anyone working with</p>]]></description><link>https://stevenrhodes.us/list-the-ip-address-of-azure-load-balancers/</link><guid isPermaLink="false">6217ef63773e290001364360</guid><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Thu, 24 Feb 2022 21:12:27 GMT</pubDate><content:encoded><![CDATA[<p>Today I was tasked with inventorying assets and determining whether they were involved in handling ePHI (electronic Personal Health Information). The challenge I was presented was being given only the IP Address of the Load Balancer and the default name created by Azure - &quot;kubernetes&quot;. Anyone working with large scale infrastructure and deployments will quickly realize that identifying each of 100 IP addresses individually becomes a monotonous task, especially in Azure. 
</p><p>After painfully going through the first Load Balancer I found it necessary to convert the steps below into a one-liner.</p><pre><code class="language-bash"># List all Load Balancers in the Subscription
$ az network lb list

# List all frontend IP Objects in the Load Balancer
$ az network lb frontend-ip list --lb-name kubernetes --resource-group &lt;your resource group&gt;

# Show the AzureRM Resource ID for the IP object
$ az network lb frontend-ip show --name &lt;object name&gt; --lb-name kubernetes --resource-group &lt;resource group name&gt; --output json --query publicIpAddress.id

# Show the actual IP Address
$ az network public-ip show --id &lt;azurerm resource id&gt; --query ipAddress</code></pre><p>As you can see above, it takes 4 separate commands to get to the IP Address. You can shortcut that list if you know the specific objects in the detailed output, so below is a shorter version of the 4 steps above. Its output will be the Azure Resource ID and the IP Address. I find the Resource ID sufficient for the environments I maintain, as the Resource Group name tells me the Region, Product and Environment each resource lives in (example: ue-xyz-dev-&lt;extra details&gt;, where &quot;xyz&quot; is the product abbreviation). You may need to alter this to be more detailed for your needs.</p><pre><code class="language-bash">$ for id in $(az network lb list --output json | jq -r &apos;.[].frontendIpConfigurations[].publicIpAddress.id&apos;); do printf &quot;\n%s\n&quot; &quot;${id}&quot;; az network public-ip show --output tsv --id ${id} --query &apos;ipAddress&apos;; done</code></pre><p>Hopefully this will save you some time looking up resources in your Azure environment(s).</p>]]></content:encoded></item><item><title><![CDATA[Import data into MySQL table with Python and MySQL]]></title><description><![CDATA[<p>I created this script as a simple means to step through the files found in a<br>directory and import them into a MySQL Table via a cron job. 
The files are<br>placed into a specific directory by another task. Each file was compressed via<br>tar with gzip and encrypted with</p>]]></description><link>https://stevenrhodes.us/import-data-into-mysql-table-with-python-and-mysql/</link><guid isPermaLink="false">607bcf888667b800016f111a</guid><category><![CDATA[Python]]></category><category><![CDATA[MySQL]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 06:20:37 GMT</pubDate><content:encoded><![CDATA[<p>I created this script as a simple means to step through the files found in a<br>directory and import them into a MySQL Table via a cron job. The files are<br>placed into a specific directory by another task. Each file was compressed via<br>tar with gzip and encrypted with gpg. So, we will need to decrypt each file and<br>extract the contents so we may import the SQL data.</p><p>You will need to have installed and configured on your system:</p><ul><li>Python</li><li>mysql-client</li><li>gpg [<a href="https://www.gnupg.org/download/">https://www.gnupg.org/download/</a>]</li></ul><p>The code:</p><pre><code class="language-python">#!/usr/bin/python3

import os
import sys
from os import walk

gpgKey = &apos;%%passphrase%%&apos;
thisTable = &apos;%%table_name%%&apos;

filesPath = &apos;/data/incoming/&apos;
files = []
for ( dirpath, dirnames, filenames ) in walk( filesPath ):
    files.extend( filenames )
    break
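
# Only act on files matching the expected upload pattern so stray files in
# the directory are skipped (added sketch; the suffix matches the
# file.replace() call below)
files = [ f for f in files if f.endswith( '.sql.gz.gpg' ) ]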

for file in files:

    fileWithPath = filesPath + file

    # drop the table if it already exists
    tbl = file.replace( &apos;.sql.gz.gpg&apos;, &apos;&apos; )
    cmd = &quot;mysql &quot; + thisTable + &quot; -e &apos;DROP TABLE IF EXISTS `&quot; + tbl + &quot;`&apos;&quot;
    os.system( cmd )

    # extract encrypted compressed file into temp dir
    cmd = &quot;gpg --decrypt --passphrase &apos;&quot; + gpgKey + &quot;&apos; --no-use-agent --cipher-algo AES256 &quot; + fileWithPath + &quot; | gunzip &gt; /data/tmp/working.sql&quot;
    os.system( cmd )

    # import data into MySQL database
    cmd = &quot;mysql &quot; + thisTable + &quot; &lt; /data/tmp/working.sql&quot;
    os.system( cmd )
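
    # os.system returns the command's exit status; in an unattended cron run
    # you may prefer to stop on failure rather than continue silently
    # (illustrative sketch, not part of the original script):
    #
    #   if os.system( cmd ) != 0:
    #       sys.exit( 'import failed for ' + file )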

    # remove temp file
    os.remove( &apos;/data/tmp/working.sql&apos; )

    # remove uploaded file
    os.remove( fileWithPath )
</code></pre>]]></content:encoded></item><item><title><![CDATA[Ansible in a Container]]></title><description><![CDATA[<p>Performing the tasks below should take less than 5 minutes to complete. There<br>are only 3 actual tasks to perform. &#xA0;Extra information is provided along the way<br>for different levels of experience.</p><p>This document expects that you have some basic understanding of Linux and<br>command line experience.</p><p>First you</p>]]></description><link>https://stevenrhodes.us/ansible-in-a-container/</link><guid isPermaLink="false">607bce8f8667b800016f1113</guid><category><![CDATA[Ansible]]></category><category><![CDATA[Container]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 06:18:03 GMT</pubDate><content:encoded><![CDATA[<p>Performing the tasks below should take less than 5 minutes to complete. There<br>are only 3 actual tasks to perform. &#xA0;Extra information is provided along the way<br>for different levels of experience.</p><p>This document expects that you have some basic understanding of Linux and<br>command line experience.</p><p>First you need to clone this repo. With the following command you can place this<br>repo in a directory of your choice. Note the trailing period which means to<br>clone into the current directory. Removing the trailing period will clone this<br>into its own directory within the one you have changed to in the cd &#xA0;command.</p><pre><code class="language-bash">cd to/your/directory/of/choice
git clone git@github.com:k7faq/ansible-container.git .
</code></pre><p>EXAMPLES<br>Both of these examples accomplish the same task.</p><pre><code class="language-bash">mkdir /home/myuser/containers/; cd $_
git clone git@github.com:k7faq/ansible-container.git .
</code></pre><p>OR</p><pre><code class="language-bash">mkdir /home/myuser/containers/
cd /home/myuser/containers/
git clone git@github.com:k7faq/ansible-container.git .
</code></pre><p>Then you need to build the container using the following command FROM WITHIN the<br>directory you cloned your repo into. A shell<br>script is provided:</p><p><code>./build</code></p><p>You now have a container built named ansible_2.7.</p><p>During the build process above an alias was created on your behalf. Provided<br>your Linux distro uses a .bash_profile &#xA0;in your home directory to store custom<br>settings, you should be good to go.</p><p>If all has succeeded then you should be greeted with success reports like this:</p><pre><code>PLAY RECAP *****************************************************************

localhost                  : ok=1    changed=1    unreachable=0    failed=0   
</code></pre><p>How do I use this?<br>This container configuration allows the user to specify the command they wish to<br>execute against the installed and configured application(s). For example, Ansible<br>offers several executable commands, including ansible &#xA0;and ansible-playbook,<br>etc. Following are two examples of how to use this container:</p><pre><code class="language-bash">ansi ansible-playbook configure_env.yml -vvvv


ansi ansible --version
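
# For reference, the ansi alias written to your ~/.bash_profile by the build
# script is presumably a docker run wrapper along these lines (sketch; the
# mounts are assumptions, based on the notes below about passing ~/.ssh/):
#
#   alias ansi='docker run -v "${PWD}:/work" -v "${HOME}/.ssh/:/root/.ssh/" -it --rm ansible_2.7'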
</code></pre><p>Why does this container not have an ENTRYPOINT or CMD?<br>Ansible offers more than just one command (functional feature). Not specifying<br>either of these features allows me to specify which command I want to execute<br>without needing multiple containers. Yes, there may be other options; this is<br>just the one I have chosen.</p><p>Some notes of interest<br>If you take time to look at the contents of the run command you will find that<br>your .ssh/ &#xA0;directory in your user home directory is passed to the container.<br>Why? Simply because of how Ansible works, using SSH keys to connect to resources.<br>In my case I only use a username and password during the initial startup of a<br>VM. This initial step creates the user id and establishes the appropriate key(s)<br>for future connections. Thus each subsequent connection will utilize the key(s)<br>in my user home directory -OR- from the location specified.</p><p>HELP! All has failed<br>If the above failed, please look in your ~/.bash_profile &#xA0;file to verify an<br>alias of ansi &#xA0;was created for you. If you find one then please log out and log<br>back into your terminal window and try again. If still no success then try one<br>of the following:</p><p>Create a custom shell script. This will need to be in your $PATH:</p><ul><li>use echo $PATH &#xA0;to verify your options</li><li>typically /usr/local/bin/ &#xA0;is a great place for these</li></ul><p>Or create a sym(bolic) link named ansi:</p><pre><code class="language-bash">sudo ln -sf ${PWD}/run /usr/local/bin/ansi
ls -l /usr/local/bin/ansi
</code></pre><ul><li>if not using Ubuntu (or a Debian product), verify which file is used by the<br>OS to establish your user&apos;s environment at login</li></ul>]]></content:encoded></item><item><title><![CDATA[Cloud Custodian in a Container]]></title><description><![CDATA[<p>Using Capital One Cloud Custodian in a container<br>Continuing with my desire to containerize as many services as possible ... I<br>created a container profile for Cloud Custodian<br>[<a href="https://github.com/cloud-custodian">https://github.com/cloud-custodian</a>] &#xA0;(c7n) and c7n-org that I use successfully<br>every day, both on my local machine and on deployed web</p>]]></description><link>https://stevenrhodes.us/cloud-custodian-in-a-container/</link><guid isPermaLink="false">607bcdd78667b800016f110d</guid><category><![CDATA[Cloud Custodian]]></category><category><![CDATA[Container]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 06:14:20 GMT</pubDate><content:encoded><![CDATA[<p>Using Capital One Cloud Custodian in a container<br>Continuing with my desire to containerize as many services as possible ... I<br>created a container profile for Cloud Custodian<br>[<a href="https://github.com/cloud-custodian">https://github.com/cloud-custodian</a>] &#xA0;(c7n) and c7n-org that I use successfully<br>every day, both on my local machine and on deployed web servers.</p><p>Permissions<br>As with any other AWS Service you will need to declare permissions. You have a<br>choice of asserting Access Keys on your machine or leveraging the power of Roles<br>and thus limiting the risks associated with unauthorized access and usage.</p><p>On your controlling machine you need to set some AWS permissions. Custodian will<br>by default look for the default AWS Credentials configurations. Thus, create<br>the following files and configure them accordingly.</p><pre><code class="language-bash"># ~/.aws/credentials

[sandbox-playground]
role_arn=arn:aws:iam::#YourAccount#:role/custodianAccessRole
source_profile=default
region=us-east-1


# ~/.aws/config

[default]
region=us-east-1
</code></pre><p>Example usage:</p><p><code>custodian run --profile sandbox-playground --output-dir . --verbose asg.yml</code></p><p>Let&apos;s look at some code<br>Container:</p><pre><code># Dockerfile 
FROM python:3.7-alpine

RUN pip install c7n c7n-org \
    &amp;&amp; mkdir -p /scripts/
</code></pre><p>Note the absence of ENTRYPOINT &#xA0;and CMD &#xA0;in this Dockerfile. This allows the<br>passing of commands, making the usage more general. This will be better<br>understood as we get to the Usage section below.</p><pre><code class="language-bash"># Build script
#!/bin/bash 
docker build -t=&quot;custodian&quot; .

## NOTE ## Make sure to update the path (/srv/custodian/run) to the full path of the directory in which you place these files. 
## The following will create a SymLink (Symbolic Link) to the run file saving you the hassle of referencing the full path 
## each time you execute the command.
ln -sf /srv/custodian/run /usr/local/bin/custodian
</code></pre><p>Run Script<br>The &quot;$@&quot; appends ALL the verbiage following the $ custodian &#xA0;command. Reference<br>the Usage section below for examples.</p><pre><code class="language-bash">docker run -v &quot;${PWD}:/policies&quot; -v &quot;${HOME}/.aws/:/root/.aws/&quot; -v &quot;/tmp/custodian:/tmp&quot; -e AWS_PROFILE -it --rm custodian &quot;$@&quot;</code></pre><p>Yes, it would be more readable as below. However I have experienced too many<br>issues of it not being interpreted properly, so I use the above without issues,<br>keeping in mind that I cross multiple platforms daily.</p><pre><code class="language-bash">docker run \
  -v &quot;${PWD}:/policies&quot; \
  -v &quot;${HOME}/.aws/:/root/.aws/&quot; \
  -v &quot;/tmp/custodian:/tmp&quot; \
  -e AWS_PROFILE \
  -it --rm \
  custodian &quot;$@&quot;
</code></pre><pre><code class="language-bash"># Usage
$ custodian run --region us-east-1 --output-dir . --verbose custodian_aws_asg.yml

# NOTE the usage of the alias c7n
$ c7n custodian_aws_asg.yml

# NOTE the usage of the alias c7n-org
$ c7n-org run --config accounts.yml --output-dir . --tags path:/myEnv/Sandbox --dryrun --use custodian_aws_asg.yml
</code></pre><p>Aliases that I use<br>I use many aliases on my machines. These are meant to reduce the amount of<br>typing and increase efficiency not having to remember every key switch or the<br>unintended typos costing time to correct, etc. Here is an example alias as shown<br>in the Usage section above.</p><pre><code class="language-bash">alias c7n=&apos;custodian run --output-dir=/tmp &apos;
alias c7n-org=&apos;custodian c7n-org&apos;
</code></pre><pre><code class="language-bash"># Example AWS User Data
#!/bin/bash
apt-get update -y --fix-missing
apt-get install -y unzip python python-pip python3 python3-pip
pip3 install awscli --upgrade

# Install Docker
curl -sSL https://get.docker.com/ | sh
systemctl enable docker; systemctl start docker

aws s3 cp s3://%% YOUR BUCKET %%/custodian_Container.zip /tmp/custodian_Container.zip
mkdir -p /srv/custodian/
unzip /tmp/custodian_Container.zip -d /srv/custodian/
mkdir -p /root/.aws/
mv /srv/custodian/aws/* /root/.aws/
cd /srv/custodian/ &amp;&amp; chmod +x run &amp;&amp; sh build
ln -s /srv/custodian/run /usr/local/bin/custodian
</code></pre>]]></content:encoded></item><item><title><![CDATA[Protecting your S3 data]]></title><description><![CDATA[<p>How to protect your precious S3 data from being deleted<br>Recently I worked on creating Cloud Custodian policies to maintain a Sandbox<br>environment. In the course of development and testing an unexpected action<br>occurred, resulting in the deletion of multiple S3 Buckets. Here I will discuss<br>ways to protect this</p>]]></description><link>https://stevenrhodes.us/protecting-your-s3-data/</link><guid isPermaLink="false">607bcc958667b800016f1103</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 06:09:33 GMT</pubDate><content:encoded><![CDATA[<p>How to protect your precious S3 data from being deleted<br>Recently I worked on creating Cloud Custodian policies to maintain a Sandbox<br>environment. In the course of development and testing an unexpected action<br>occurred, resulting in the deletion of multiple S3 Buckets. Here I will discuss<br>ways to protect this data and prevent accidental deletion of files.</p><p>Use MFA (Multi-Factor Authentication) or Versioning<br>This AWS Document<br>[<a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html">https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html</a>] &#xA0;discusses<br>this topic in more detail. One of my coworkers reported having encountered<br>extreme difficulty in trying to delete files from a bucket and the bucket itself<br>after enabling the MFA protection. While MFA will protect your bucket from<br>deletion, the other option is Versioning. Versioning will allow you to store<br>multiple versions of your files.</p><p>An easy way to protect your bucket with the MFA Delete protection and/or<br>Versioning is via Terraform. 
Using Terraform you can apply versioning to your<br>bucket as in this example (note the versioning block).<br>Here we are enabling versioning with the enabled = true<br>statement and forcing MFA Delete protection with the mfa_delete = true<br>statement.</p><pre><code>resource &quot;aws_s3_bucket&quot; &quot;b&quot; {
  bucket = &quot;my-tf-test-bucket&quot;
  acl    = &quot;private&quot;

  versioning {
    enabled = true
    mfa_delete = true
  }
}
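
# Note: on version 4+ of the Terraform AWS provider the inline versioning
# block above is deprecated in favour of a separate resource (sketch, same
# bucket as above):
#
# resource "aws_s3_bucket_versioning" "b" {
#   bucket = aws_s3_bucket.b.id
#   versioning_configuration {
#     status     = "Enabled"
#     mfa_delete = "Enabled"
#   }
# }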
</code></pre><p>I will include other IaC examples as time permits.<br>Future enhancements:</p><ul><li>Ansible</li><li>Others?</li></ul>]]></content:encoded></item><item><title><![CDATA[CI/CD via GitLab and a Runner in Containers?]]></title><description><![CDATA[<p>What I learned about configuring a GitLab Server with a Runner using Docker<br>Containers<br>It is important to note &#xA0;this document is for Proof of Concept testing. IT IS<br>NOT &#xA0;meant as a go-to for Production environments as the use of SSL (HTTPS) is<br>not covered</p>]]></description><link>https://stevenrhodes.us/ci-cd-via-gitlab-and-a-runner-in-containers/</link><guid isPermaLink="false">607bcbe68667b800016f10f5</guid><category><![CDATA[GitLab]]></category><category><![CDATA[Container]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 06:06:41 GMT</pubDate><content:encoded><![CDATA[<p>What I learned about configuring a GitLab Server with a Runner using Docker<br>Containers<br>It is important to note &#xA0;this document is for Proof of Concept testing. IT IS<br>NOT &#xA0;meant as a go-to for Production environments as the use of SSL (HTTPS) is<br>not covered in this article.</p><p>Containers<br>I fully support Containers and containerization of services. I agree with the<br>portability viewpoint, but for me it is more about keeping a &quot;clean&quot; system. The<br>fewer services installed, the fewer conflicts to work out, and the fewer updates I<br>must worry about conflicting with installed services, and so on. Next, bundled configurations minimize conflicts between packages and updates,<br>allowing less opportunity for downtime while forced to troubleshoot a<br>package update. Although building containers on the fly does not protect you<br>from this. 
To truly appreciate this means of minimizing conflicts it would be<br>best to consider hosting your own Docker registry or publishing your containers<br>publicly.</p><p>What I tried<br>Initially I was trying to run a Container on my notebook with a Runner in an AWS<br>EC2. This will NOT work as the runner must be able to communicate with the<br>GitLab Server via an IP or FQDN. Next I tried running both the GitLab CE Server<br>and a Runner in the same EC2. In doing so I found a t2.micro is insufficient for<br>this purpose, although a t2.large is capable of handling a small learning<br>configuration.</p><p>As I support IaC (Infrastructure as Code)<br>I built a Terraform Module that creates the AWS Environment and necessary<br>resources:<br>Terraform code used to create the AWS Environment is available here: GitHub<br>[URL].</p><p>Once you have your VM running with Docker installed, this User Data will<br>establish your GitLab-CE Container. Once completed, you can access your new<br>GitLab-CE environment via http://your_domain.tld/. 
Be sure to append the URL<br>with your custom port number if you elect to use a port other than 80.</p><pre><code class="language-bash">#!/bin/bash

apt-get update -y
apt-get install -y awscli unzip

# Install Docker
curl -sSL https://get.docker.com/ | sh
systemctl enable docker; systemctl start docker

######### GITLAB SERVER
HOSTNAME=$(curl http://169.254.169.254/latest/meta-data/public-hostname)
docker run --detach \
  --hostname &quot;$HOSTNAME&quot; \
  --publish 80:80 --publish 2289:22 \
  --name gitlab \
  --restart always \
  --volume ~/Library/Docker/gitlab/config:/etc/gitlab \
  --volume ~/Library/Docker/gitlab/logs:/var/log/gitlab \
  --volume ~/Library/Docker/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

######### GITLAB RUNNER
mkdir -p /srv/gitlab-runner/config
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest</code></pre><p>This next step is not automated as I did not take time to figure out if an API<br>exists to extract this info, et al.<br>Following the deployment of the EC2, you will need to associate the Runner with<br>the GitLab Server:</p><pre><code class="language-bash">HOSTNAME=$(curl http://169.254.169.254/latest/meta-data/public-hostname)
TOKEN=%% get from your gitlab server %%
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
  --non-interactive \
  --executor &quot;docker&quot; \
  --docker-image ubuntu:18.04 \
  --url &quot;%% URL TO YOUR SERVER %%&quot; \
  --registration-token &quot;${TOKEN}&quot; \
  --description &quot;%% YOUR RUNNER DESCRIPTION %%&quot; \
  --tag-list &quot;test,AWS&quot; \
  --run-untagged \
  --locked=&quot;false&quot;</code></pre><ol><li>Be sure to set 
your --executor &#xA0;accordingly (<a href="https://docs.gitlab.com/runner/executors/README.html">docs</a>). I chose Docker.</li><li>Set the --docker-image &#xA0;you will use each time your runner executes. Be sure<br>to plan this accordingly. If you choose to use Docker for your Runners and<br>you set Alpine here as the distro, you will need to make sure your runners<br>are configured to use Alpine packages, etc.</li><li>Set the --url &#xA0;to point to your server. E.g. <a href="http://gitlab.mydomain.com">http://gitlab.mydomain.com</a>.<br>This is documented in your GitLab-CE Server. Log in to your GitLab-CE server.<br>On the lower left click: Settings | CI/CD | Runners. Scroll down to &quot;Specific<br>Runners&quot; and you will find the URL.</li><li>Set the --registration-token. You will need to access your GitLab-CE Server<br>to obtain this. Log in to your GitLab-CE server. On the lower left click:<br>Settings | CI/CD | Runners. Scroll down to &quot;Specific Runners&quot; and you will find<br>the registration token shown below the URL.</li><li>Set the --description &#xA0;to something meaningful. This will help you<br>distinguish this runner from your other runners as you create them in the<br>future.</li><li>Set the --tag-list &#xA0;with tags that will be used to kick off this runner<br>process.</li></ol>]]></content:encoded></item><item><title><![CDATA[How to Download Logs from a Kubernetes Node for investigation]]></title><description><![CDATA[<p>Recently I encountered an issue with a Kubernetes Cluster not routing to the Internet properly. It was first discovered by the Product Team during the execution of a Production Release. Pods were repeating the CrashLoopBackOff cycle. 
The logs revealed one of the containers was unable to establish a connection to</p>]]></description><link>https://stevenrhodes.us/how-to-download-logs-from-a-node-for-investigation/</link><guid isPermaLink="false">607bae988667b800016f1094</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Steven Rhodes]]></dc:creator><pubDate>Sun, 18 Apr 2021 04:02:56 GMT</pubDate><content:encoded><![CDATA[<p>Recently I encountered an issue with a Kubernetes Cluster not routing to the Internet properly. It was first discovered by the Product Team during the execution of a Production Release. Pods were repeating the CrashLoopBackOff cycle. The logs revealed one of the containers was unable to establish a connection to Azure Service Bus. This resulted in a 10-hour marathon working with Azure Support to identify and resolve the issue. Thankfully we were able to initiate our Business Continuity Disaster Recovery Plan and rolled all customer workloads to an alternate Region.</p><p>During this event Microsoft Azure Rapid Response Technical Support requested copies of iptables dumps, etc., to allow them to investigate. For experienced Linux Administrators this would seem simple enough - <code>ssh admin@cluster &apos;iptables -L&apos; &gt; iptables.out</code>, right? Not so quick on a Managed Kubernetes Cluster. One of several solutions to this problem is to use <a href="https://github.com/kvaps/kubectl-node-shell">Node Shell</a> to access the Kubernetes node.</p><p><strong>The following demonstration is based on Azure Kubernetes Service. The <code>kubectl</code> commands are universal to any Kubernetes Cluster.</strong></p><p>List the clusters in the Subscription.</p><pre><code class="language-bash"># List the Clusters
az aks list
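
# Tip (sketch): a JMESPath --query can narrow the output to just the fields
# you need, e.g.:
#   az aks list --query '[].{name:name, resourceGroup:resourceGroup}' --output table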
</code></pre><p>Obtain the credentials for the Cluster</p><pre><code class="language-bash"># Get the Credentials
az aks get-credentials --name &lt;name of the cluster&gt; --resource-group &lt;resource group name&gt;
</code></pre><p>List the nodes</p><pre><code class="language-bash">kubectl get nodes --output wide
</code></pre><p>Enter the Node. Essentially we are SSHing into the node via the Node Shell. As this demo is based on a &quot;managed&quot; deployment, I do not have the ability to directly SSH into the node. Albeit, yes - I could have created a pod with the proper private SSH key to accomplish the same, but in the interest of time ...</p><pre><code class="language-bash">kubectl node-shell aks-cluster1-33854278-vmss000000
</code></pre><p>What has happened at this point is that an <code>nsenter</code> pod has been created for you on the node. When you exit the <code>node-shell</code> session, the pod will be removed automatically.</p><p>Package the messages and syslog files up in a tarball. Here I am using gzip compression. Note that for distinction I am using the pattern <code>&lt;node name&gt;.&lt;log file type/name&gt;.tar.gz</code>. This identifies that we are creating a tarball using gzip compression, and which type of files for which machine. The archives are written to <code>/tmp</code> so they can be copied off via the pod later.</p><pre><code class="language-bash">tar czf /tmp/aks-cluster1-33854278-vmss000000.messages.tar.gz /var/log/messages*
tar czf /tmp/aks-cluster1-33854278-vmss000000.syslog.tar.gz /var/log/syslog*
</code></pre><p><em><strong>Do not exit this terminal session yet.</strong></em></p><p>Now we need to leverage the pod that was created so we can transfer off the archived files. To do this we need to get the name of the pod. So here we will get a list of pods in the default namespace. We need to identify the pod having a name starting with <code>nsenter</code>.</p><pre><code class="language-bash">kubectl get pods --namespace default

NAME             READY   STATUS    RESTARTS   AGE
nsenter-9knomx   1/1     Running   0          14m
</code></pre><p>Now that we have the name of the pod we can copy the archives to your local workstation. Here we use the <code>kubectl cp</code> command. Note that the path to the files created earlier is referenced after the <code>:</code> that follows the pod name. This is the same methodology used in rsync and scp.</p><pre><code class="language-bash">kubectl cp nsenter-9knomx:/tmp/aks-cluster1-33854278-vmss000000.syslog.tar.gz aks-cluster1-33854278-vmss000000.syslog.tar.gz
</code></pre><p>Now you can list the files you downloaded.</p><pre><code class="language-bash">#Linux
ls -l *.gz
-rw-r--r--  1 srhodes  staff  15280652 Apr 16 20:26 aks-cluster1-33854278-vmss000000.messages.tar.gz
-rw-r--r--  1 srhodes  staff  14671049 Apr 16 20:27 aks-cluster1-33854278-vmss000000.syslog.tar.gz
</code></pre><p>For non-*nix users, you can use <code>dir</code> in place of <code>ls</code>.</p>]]></content:encoded></item></channel></rss>