<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Connakers SRE | DevOps | Cloud Journey]]></title><description><![CDATA[SRE | DevOps | Cloud]]></description><link>https://devblog.connaker.org</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1657244298978/Inr1_A--D.jpg</url><title>Connakers SRE | DevOps | Cloud Journey</title><link>https://devblog.connaker.org</link></image><generator>RSS for Node</generator><lastBuildDate>Mon, 13 Apr 2026 23:51:09 GMT</lastBuildDate><atom:link href="https://devblog.connaker.org/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Deploy IaC in AWS using CI/CD pipeline]]></title><description><![CDATA[Hello and Welcome!
In this article, we will go over how to use a CI/CD pipeline to deploy infrastructure into AWS using an IaC tool.



Overview
As DevOps and SRE engineers, we ideally want to automate the deployment of infrastructure, applications and ...]]></description><link>https://devblog.connaker.org/deploy-iac-in-aws-using-cicd-pipeline</link><guid isPermaLink="true">https://devblog.connaker.org/deploy-iac-in-aws-using-cicd-pipeline</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Pipeline]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Michael Connaker]]></dc:creator><pubDate>Mon, 04 Jul 2022 15:08:29 GMT</pubDate><content:encoded><![CDATA[<p>Hello and Welcome!</p>
<p>In this article, we will go over how to use a CI/CD pipeline to deploy infrastructure into AWS using an IaC tool.</p>
<h3 id="heading-overview">Overview</h3>
<p>As DevOps and SRE engineers, we ideally want to automate the deployment of infrastructure, applications, and code. For an SRE, that is the mantra: automate everything.</p>
<p>This is where Infrastructure as Code comes in. Infrastructure as Code, or IaC, is the practice of managing and provisioning data centers or cloud infrastructure through machine-readable definition files. IaC can serve as a source of truth and be version-controlled.</p>
<p>When it comes to deployment, you have Configuration Management and Orchestration. Configuration Management tools are primarily used to deploy and manage software, but can also be used to deploy infrastructure. Some examples include Ansible, Puppet and Chef.</p>
<p>Orchestration tools are used primarily to provision infrastructure. For orchestration tools, there are native tools for cloud services, such as AWS Cloudformation, Azure Resource Manager, Google Cloud Deployment Manager and open source tools like Terraform or Pulumi. </p>
<p>In this article, we will use Terraform, a popular open source IaC orchestration tool, to define our infrastructure and deploy it directly through a CI/CD pipeline. Terraform is a great tool for creating, changing, and versioning infrastructure safely and efficiently.</p>
<p>Terraform uses state files, which keep track of the resources created by your configuration and map them to real-world resources.</p>
<p>State files must be preserved in a durable backend so that subsequent deployments can reference, modify, or destroy the resources they describe. Backends can be local or remote, for example in S3. When multiple engineers work on the same infrastructure, it is good practice to store state remotely.</p>
<p>Now, as mentioned before, as SREs we want to automate everything. So how do we automate Terraform itself?</p>
<p>There are several solutions, including open source tools like Atlantis, but the most efficient way is through a CI/CD platform.</p>
<p>For this solution, we will use two AWS services to form the foundation of the CI/CD pipeline: AWS CodePipeline for continuous delivery (CD) and AWS CodeBuild for continuous integration (CI).</p>
<p>CodePipeline automates our release pipeline through the build, test, and deployment stages. CodeBuild compiles source code, runs tests, and produces software packages that are ready to deploy.</p>
<h3 id="heading-architecture">Architecture</h3>
<p>Naturally, when it comes to creating this, there will be several AWS services that we will need to use.</p>
<p>These include:</p>
<ul>
<li>IAM Roles and Policies</li>
<li>AWS CodePipeline</li>
<li>AWS CodeBuild</li>
<li>S3</li>
<li>DynamoDB Table</li>
<li>Container Registry</li>
</ul>
<p>Outside of AWS, we will be using Terraform and GitHub.</p>
<p>When we are finished, this is what our infrastructure will look like:
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55pn8566oapxw8mt1heq.jpg" alt="Image description" /></p>
<h3 id="heading-requirements">Requirements</h3>
<p>Naturally, there are some prerequisites. We will not cover how to set these up, but they are listed below.</p>
<p>First, you'll need an AWS account, with your AWS config and credentials configured on your local machine.</p>
<p>Second, you will need a GitHub account.</p>
<p>Third, you will need Terraform installed on your local machine.</p>
<p>Alright, let's get started.</p>
<h3 id="heading-build-out">Build out</h3>
<p>Our first stop is GitHub. We'll create a repository to store the Terraform code that the pipeline will deploy to AWS.</p>
<p>To save some time, here is a <a target="_blank" href="https://github.com/mconnaker/aws-terraform-pipeline/tree/main/example_repo">Example repository</a>.</p>
<p>Let's take a look at the repository. Reviewing main.tf, we see that we'll be deploying VPCs into AWS. Reviewing variables.tf, you will see the CIDR blocks and VPC names.</p>
<p>There are two other significant files - <code>terraform_plan.yml</code> and <code>terraform_apply.yml</code>. These are the buildspecs that AWS CodeBuild will use to plan and apply infrastructure. By default they must live in the root of the repository, although you can point CodeBuild at a subdirectory path such as <code>subdirectory/terraform_plan.yml</code>. Without these files, AWS CodeBuild will fail.</p>
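<p>We won't reproduce the repository's buildspecs here, but to give a sense of their shape, a minimal plan buildspec might look something like the following. This is only a sketch - the exact phases, Terraform version, and download URL are illustrative, and the files in the example repository may differ:</p>
<pre><code>version: 0.2

phases:
  install:
    commands:
      # Fetch a Terraform binary into the build container (version is illustrative)
      - wget -q https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
      - unzip terraform_1.0.11_linux_amd64.zip -d /usr/local/bin/
  build:
    commands:
      - terraform init -input=false
      - terraform plan -input=false
</code></pre>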
<p>Lastly, let's take a look at provider.tf. Note that we do not have a backend yet. This will need to be created.</p>
<p>Let's go ahead and do that now.</p>
<h4 id="heading-creating-terraform-backend-in-s3">Creating the Terraform Backend in S3</h4>
<p>Like everything else, we want to automate this deployment. For that, we will use Terraform itself.</p>
<p>On your local machine, create a main.tf file. In it, we will create an S3 bucket and a DynamoDB table to store our state file and locks. In the <code>aws_s3_bucket</code> resource, replace <code>terraform_state_bucket_name</code> with a name for your bucket (note that S3 bucket names must be globally unique, lowercase, and cannot contain underscores). In the <code>aws_dynamodb_table</code> resource, replace <code>app-state</code> with an easily identifiable name.</p>
<pre><code>provider <span class="hljs-string">"aws"</span> {
    region <span class="hljs-operator">=</span> <span class="hljs-string">"us-east-1"</span>
}

terraform {
    required_providers {
      aws <span class="hljs-operator">=</span> {
        source   <span class="hljs-operator">=</span> <span class="hljs-string">"hashicorp/aws"</span>
        version  <span class="hljs-operator">=</span> <span class="hljs-string">"~&gt; 4.0"</span>
      }
    }
}

resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"terraform_state"</span>{
    bucket      <span class="hljs-operator">=</span> <span class="hljs-string">"terraform_state_bucket_name"</span>

    lifecycle {
      prevent_destroy <span class="hljs-operator">=</span> <span class="hljs-literal">true</span>
    }
}

resource <span class="hljs-string">"aws_s3_bucket_versioning"</span> <span class="hljs-string">"terraform_state"</span> {
    bucket <span class="hljs-operator">=</span> aws_s3_bucket.terraform_state.id

    versioning_configuration {
      status <span class="hljs-operator">=</span> <span class="hljs-string">"Enabled"</span>
    }
}

resource <span class="hljs-string">"aws_dynamodb_table"</span> <span class="hljs-string">"terraform_state_lock"</span> {
  name           <span class="hljs-operator">=</span> <span class="hljs-string">"app-state"</span>
  read_capacity  <span class="hljs-operator">=</span> <span class="hljs-number">1</span>
  write_capacity <span class="hljs-operator">=</span> <span class="hljs-number">1</span>
  hash_key       <span class="hljs-operator">=</span> <span class="hljs-string">"LockID"</span>

  attribute {
    name <span class="hljs-operator">=</span> <span class="hljs-string">"LockID"</span>
    <span class="hljs-keyword">type</span> <span class="hljs-operator">=</span> <span class="hljs-string">"S"</span>
  }
}
</code></pre><p>Now run <code>terraform init</code> and <code>terraform apply</code>. Congrats - you now have an S3 backend for your state file and a DynamoDB table for your lock entries.</p>
<p>Let's add this to your repository. Go ahead and fork the <a target="_blank" href="https://github.com/mconnaker/aws-terraform-pipeline/tree/main/example_repo">example repository</a> to your own repository, then clone it to your local machine.</p>
<p>Now, in the provider.tf file, let's update this with our backend information.</p>
<pre><code>terraform {
  backend <span class="hljs-string">"s3"</span>{
    bucket          <span class="hljs-operator">=</span> <span class="hljs-string">"terraform_state_bucket_name"</span>
    key             <span class="hljs-operator">=</span> <span class="hljs-string">"terraform.tfstate"</span>
    region          <span class="hljs-operator">=</span> <span class="hljs-string">"us-east-1"</span>
    dynamodb_table  <span class="hljs-operator">=</span> <span class="hljs-string">"app-state"</span>
  }
}
</code></pre><p>Note our key. The key is the path to the state file inside the S3 bucket. Here, we call it terraform.tfstate. The file does not exist yet, so Terraform will create it on the first apply.</p>
<h3 id="heading-creating-our-cicd-pipeline">Creating our CI/CD Pipeline</h3>
<p>Alright, now that we have our backend created, we are ready to build the pipeline. This can be done in two ways - manually through the AWS console or through IaC. Naturally, as SREs we want to automate everything.</p>
<p>Once again, we will define it using Terraform. To save time, there is a <a target="_blank" href="https://github.com/mconnaker/aws-terraform-pipeline">repository</a> with the files. Feel free to fork and clone it to your local machine.</p>
<p>Like good engineers, let's go over exactly what we are building.</p>
<h4 id="heading-main-tf-file">main.tf</h4>
<p>A review of main.tf will show that it is pretty busy. Lots of resources and information to digest. Let's dissect this a little to understand better what we're attempting to accomplish.</p>
<h5 id="heading-iam-roles-and-policy">IAM Roles and Policy</h5>
<p>We require an IAM role that AWS CodePipeline and CodeBuild will assume to deploy infrastructure. Second, we need a policy granting this role List, Get, and Put access to our S3 buckets. Finally, we give the role Power User access so it can deploy the infrastructure.</p>
<p>Naturally, you may want to restrict this access further. For example, we could create separate IAM policies for multiple pipelines and restrict each policy to only EC2, EKS, or ECS deployments, only network infrastructure, or only IAM. However, for this project we are using one pipeline, so we will use Power User access.</p>
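<p>As an illustration of tighter scoping, a hypothetical policy for a pipeline that only deploys network infrastructure might look something like this. The policy name and the list of actions are examples only, not part of this project's code, and a real VPC pipeline would likely need additional actions:</p>
<pre><code>resource "aws_iam_policy" "network_only" {
  name = "tf-pipeline-network-only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      # Restrict the pipeline role to VPC-level actions only
      Action = [
        "ec2:CreateVpc",
        "ec2:DeleteVpc",
        "ec2:DescribeVpcs",
        "ec2:CreateSubnet",
        "ec2:DeleteSubnet",
        "ec2:DescribeSubnets",
        "ec2:CreateTags",
        "ec2:DescribeTags"
      ]
      Resource = "*"
    }]
  })
}
</code></pre>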
<h5 id="heading-s3-buckets">S3 buckets</h5>
<p>Here, we create two buckets - one for CodePipeline to store artifacts and one for CodeBuild to store cache files. On each, we set a private ACL and enable versioning.</p>
<pre><code>resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"bucket"</span> {
  bucket <span class="hljs-operator">=</span> <span class="hljs-string">"tf-pipeline"</span>
}

resource <span class="hljs-string">"aws_s3_bucket_acl"</span> <span class="hljs-string">"acl"</span> {
  bucket <span class="hljs-operator">=</span> aws_s3_bucket.bucket.id
  acl    <span class="hljs-operator">=</span> <span class="hljs-string">"private"</span>
}

resource <span class="hljs-string">"aws_s3_bucket_versioning"</span> <span class="hljs-string">"versioning"</span> {
  bucket <span class="hljs-operator">=</span> aws_s3_bucket.bucket.id
  versioning_configuration {
    status <span class="hljs-operator">=</span> <span class="hljs-string">"Enabled"</span>
  }
}

resource <span class="hljs-string">"aws_s3_bucket"</span> <span class="hljs-string">"cb_bucket"</span> {
  bucket <span class="hljs-operator">=</span> <span class="hljs-string">"tf-pipeline-cb"</span>
}

resource <span class="hljs-string">"aws_s3_bucket_acl"</span> <span class="hljs-string">"cb_acl"</span> {
  bucket <span class="hljs-operator">=</span> aws_s3_bucket.cb_bucket.id
  acl    <span class="hljs-operator">=</span> <span class="hljs-string">"private"</span>
}

resource <span class="hljs-string">"aws_s3_bucket_versioning"</span> <span class="hljs-string">"cb_versioning"</span> {
  bucket <span class="hljs-operator">=</span> aws_s3_bucket.cb_bucket.id
  versioning_configuration {
    status <span class="hljs-operator">=</span> <span class="hljs-string">"Enabled"</span>
  }
}
</code></pre><h5 id="heading-codepipeline">CodePipeline</h5>
<p>Let's break down the CodePipeline resource and review what it is doing.</p>
<p>First, we need to assign the IAM role created above to our CodePipeline. This is done with <code>role_arn = aws_iam_role.role.arn</code>, which allows CodePipeline to read from and write to the S3 buckets.</p>
<pre><code>resource "aws_codepipeline" "codepipeline"{
  <span class="hljs-type">name</span> = "terraform-pipeline"
  role_arn = aws_iam_role.<span class="hljs-keyword">role</span>.arn
</code></pre><p>Next, we need to identify where we are storing our artifacts. CodePipeline integrates with development tools to detect code changes, then builds and deploys through each stage of the continuous delivery process. Stages pass input and output artifacts that are stored in the Amazon S3 artifact bucket. For this pipeline, we will store them in the <code>tf-pipeline</code> bucket. Note that further on, the later stages use the source stage's output artifact as their input artifact.</p>
<pre><code>resource <span class="hljs-string">"aws_codepipeline"</span> <span class="hljs-string">"codepipeline"</span>{
  name <span class="hljs-operator">=</span> <span class="hljs-string">"terraform-pipeline"</span>
  role_arn <span class="hljs-operator">=</span> aws_iam_role.role.arn

  artifact_store {
    location <span class="hljs-operator">=</span> aws_s3_bucket.bucket.bucket
    <span class="hljs-keyword">type</span>     <span class="hljs-operator">=</span> <span class="hljs-string">"S3"</span>
  }
</code></pre><p>Alright, on to stages. We will be using three stages for the CodePipeline. </p>
<p>Our first stage is the <strong>source</strong>. Source is where CodePipeline detects changes and pulls them in. For this, we will use GitHub version 2, which is implemented through AWS CodeStar Connections. We also have a variable for the repository name, which can be updated in variables.tf.</p>
<p>Note further down that we have a resource that defines the CodeStar connection to GitHub. When this Terraform is deployed, you will need to complete the connection setup manually. We'll go over how to do that later.</p>
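<p>As a sketch, that connection resource might look like the following, assuming the resource name <code>tf-pipeline</code> that the stage's <code>ConnectionArn</code> references:</p>
<pre><code>resource "aws_codestarconnections_connection" "tf-pipeline" {
  name          = "tf-pipeline"
  provider_type = "GitHub"
}
</code></pre>
<p>The connection is created in a pending state and must be completed by hand in the console, which is why the manual setup step exists.</p>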
<pre><code>  stage {
      name <span class="hljs-operator">=</span> <span class="hljs-string">"Source"</span>

      action {
        name             <span class="hljs-operator">=</span> <span class="hljs-string">"Source"</span>
        category         <span class="hljs-operator">=</span> <span class="hljs-string">"Source"</span>
        owner            <span class="hljs-operator">=</span> <span class="hljs-string">"AWS"</span>
        provider         <span class="hljs-operator">=</span> <span class="hljs-string">"CodeStarSourceConnection"</span>
        version          <span class="hljs-operator">=</span> <span class="hljs-string">"1"</span>
        output_artifacts <span class="hljs-operator">=</span> [<span class="hljs-string">"source_output"</span>]

        configuration <span class="hljs-operator">=</span> {
          ConnectionArn    <span class="hljs-operator">=</span> aws_codestarconnections_connection.tf-pipeline.arn
          FullRepositoryId <span class="hljs-operator">=</span> <span class="hljs-keyword">var</span>.repositoryid
          BranchName       <span class="hljs-operator">=</span> <span class="hljs-string">"main"</span>
        }
      }
    }
</code></pre><p>The second stage is AWS CodeBuild. In this stage, we have two actions. The first uses CodeBuild to read the artifact and run the CodeBuild project that executes Terraform Plan. The second happens after Terraform Plan has run: a manual approval. This action <strong>must</strong> be approved before the next stage runs.</p>
<pre><code>  stage {
    name <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform_Plan"</span>

    action {
      name             <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform_Plan"</span>
      category         <span class="hljs-operator">=</span> <span class="hljs-string">"Build"</span>
      owner            <span class="hljs-operator">=</span> <span class="hljs-string">"AWS"</span>
      provider         <span class="hljs-operator">=</span> <span class="hljs-string">"CodeBuild"</span>
      input_artifacts  <span class="hljs-operator">=</span> [<span class="hljs-string">"source_output"</span>]
      output_artifacts <span class="hljs-operator">=</span> [<span class="hljs-string">"tfplan_output"</span>]
      version          <span class="hljs-operator">=</span> <span class="hljs-string">"1"</span>

      configuration    <span class="hljs-operator">=</span> {
        ProjectName    <span class="hljs-operator">=</span> aws_codebuild_project.terraform_plan.<span class="hljs-built_in">name</span>
      }
    }
    action {
      name             <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform_Plan_Manual_Approval"</span>
      category         <span class="hljs-operator">=</span> <span class="hljs-string">"Approval"</span>
      owner            <span class="hljs-operator">=</span> <span class="hljs-string">"AWS"</span>
      provider         <span class="hljs-operator">=</span> <span class="hljs-string">"Manual"</span>
      version          <span class="hljs-operator">=</span> <span class="hljs-string">"1"</span>
      <span class="hljs-comment"># Run after the plan action so the approval reviews the plan output</span>
      run_order        <span class="hljs-operator">=</span> <span class="hljs-number">2</span>
    }

  }
</code></pre><p>The third stage is also AWS CodeBuild. In this stage, we use CodeBuild to read the artifact and run the CodeBuild project that executes Terraform Apply.</p>
<pre><code>  stage {
    name <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform_Apply"</span>
    action {
      name             <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform_Apply"</span>
      category         <span class="hljs-operator">=</span> <span class="hljs-string">"Build"</span>
      owner            <span class="hljs-operator">=</span> <span class="hljs-string">"AWS"</span>
      provider         <span class="hljs-operator">=</span> <span class="hljs-string">"CodeBuild"</span>
      input_artifacts  <span class="hljs-operator">=</span> [<span class="hljs-string">"source_output"</span>]
      version          <span class="hljs-operator">=</span> <span class="hljs-string">"1"</span>

      configuration    <span class="hljs-operator">=</span> {
        ProjectName    <span class="hljs-operator">=</span> aws_codebuild_project.terraform_apply.<span class="hljs-built_in">name</span>
      }
    }
  }
</code></pre><p>Finally, we have the CodeBuild projects. There are two of them - one for Terraform Plan and one for Terraform Apply. Both projects are essentially the same, with some minor changes.</p>
<p>Both use the same service role as CodePipeline, and both store their cache in the S3 bucket <code>tf-pipeline-cb</code>. Both run on a small (BUILD_GENERAL1_SMALL) Linux container using the standard CodeBuild image, and each pulls in its own buildspec YAML file and sets an environment variable. The key difference is that one runs Plan while the other runs Apply.</p>
<pre><code>resource <span class="hljs-string">"aws_codebuild_project"</span> <span class="hljs-string">"terraform_plan"</span> {
  name         <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform-Plan"</span>
  service_role <span class="hljs-operator">=</span> aws_iam_role.role.arn

  artifacts {
    <span class="hljs-keyword">type</span> <span class="hljs-operator">=</span> <span class="hljs-string">"CODEPIPELINE"</span>
  }

  environment {
    compute_type    <span class="hljs-operator">=</span> <span class="hljs-string">"BUILD_GENERAL1_SMALL"</span>
    image           <span class="hljs-operator">=</span> <span class="hljs-string">"aws/codebuild/standard:3.0"</span>
    <span class="hljs-keyword">type</span>            <span class="hljs-operator">=</span> <span class="hljs-string">"LINUX_CONTAINER"</span>
    privileged_mode <span class="hljs-operator">=</span> <span class="hljs-literal">true</span>
    environment_variable {
      name  <span class="hljs-operator">=</span> <span class="hljs-string">"TF_COMMAND_P"</span>
      value <span class="hljs-operator">=</span> <span class="hljs-string">"plan"</span>
    }
  }
  cache {
    <span class="hljs-keyword">type</span>     <span class="hljs-operator">=</span> <span class="hljs-string">"S3"</span>
    location <span class="hljs-operator">=</span> <span class="hljs-string">"${aws_s3_bucket.cb_bucket.bucket}/terraform_plan/cache"</span>
  }
  source {
    <span class="hljs-keyword">type</span>      <span class="hljs-operator">=</span> <span class="hljs-string">"CODEPIPELINE"</span>
    buildspec <span class="hljs-operator">=</span> <span class="hljs-string">"terraform_plan.yml"</span>
  }
}
</code></pre><pre><code>resource <span class="hljs-string">"aws_codebuild_project"</span> <span class="hljs-string">"terraform_apply"</span> {
  name         <span class="hljs-operator">=</span> <span class="hljs-string">"Terraform-Apply"</span>
  service_role <span class="hljs-operator">=</span> aws_iam_role.role.arn

  artifacts {
    <span class="hljs-keyword">type</span> <span class="hljs-operator">=</span> <span class="hljs-string">"CODEPIPELINE"</span>
  }

  environment {
    compute_type    <span class="hljs-operator">=</span> <span class="hljs-string">"BUILD_GENERAL1_SMALL"</span>
    image           <span class="hljs-operator">=</span> <span class="hljs-string">"aws/codebuild/standard:3.0"</span>
    <span class="hljs-keyword">type</span>            <span class="hljs-operator">=</span> <span class="hljs-string">"LINUX_CONTAINER"</span>
    privileged_mode <span class="hljs-operator">=</span> <span class="hljs-literal">true</span>
    environment_variable {
      name  <span class="hljs-operator">=</span> <span class="hljs-string">"TF_COMMAND_A"</span>
      value <span class="hljs-operator">=</span> <span class="hljs-string">"apply"</span>
    }
  }
  cache {
    <span class="hljs-keyword">type</span>     <span class="hljs-operator">=</span> <span class="hljs-string">"S3"</span>
    location <span class="hljs-operator">=</span> <span class="hljs-string">"${aws_s3_bucket.cb_bucket.bucket}/terraform_apply/cache"</span>
  }
  source {
    <span class="hljs-keyword">type</span>      <span class="hljs-operator">=</span> <span class="hljs-string">"CODEPIPELINE"</span>
    buildspec <span class="hljs-operator">=</span> <span class="hljs-string">"terraform_apply.yml"</span>
  }
}
</code></pre><p>Alright, that was a lot to process. Before we deploy this, we need to make some updates to our pipeline repo.</p>
<h4 id="heading-providertf">provider.tf</h4>
<p>Before we run Terraform, we will need to update our backend. Let's update provider.tf with our backend information.</p>
<pre><code>terraform {
  backend <span class="hljs-string">"s3"</span>{
    bucket          <span class="hljs-operator">=</span> <span class="hljs-string">"terraform_state_bucket_name"</span>
    key             <span class="hljs-operator">=</span> <span class="hljs-string">"terraform-pipeline.tfstate"</span>
    region          <span class="hljs-operator">=</span> <span class="hljs-string">"us-east-1"</span>
    dynamodb_table  <span class="hljs-operator">=</span> <span class="hljs-string">"app-state"</span>
  }
}
</code></pre><p>Notice that our key has changed. For the pipeline deployment, we use a different state file from the one used for the example repo. This keeps the changes we make to each project separate and maintainable. If we used the same key, the two configurations would overwrite each other's state and cause drift, since the example repo and pipeline repo are managed independently.</p>
<h4 id="heading-deployment">Deployment</h4>
<p>Alright, with the changes made, we are ready to run Terraform. Once deployed, go to the AWS console and you should see something like this:</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4tvryqc2wtlbbte9p76z.png" alt="Image description" /></p>
<p>Now, we will need to set up that connection. In the AWS console, click Edit. On the Source stage, click Edit stage.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucs250i7n5gcytevryk6.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fj74ud7pm66w5ebhgu3.png" alt="Image description" /></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5asz3ev7yrsvc1hgkyte.png" alt="Image description" /></p>
<p>Click on Connect to GitHub. Give the connection a name and click Connect to GitHub again. Follow the directions to authorize access and select the example repository with the VPCs that we want to deploy.</p>
<p>And that is it. The pipeline should begin a release and run through the process. Remember that you'll need to manually approve the plan before the Apply stage runs. If a release does not start automatically, click on Release change.</p>
<p>Once this has finished running, jump over to the VPC section of your AWS console and you should see two new VPCs.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apbfdr1qpoc0z3yizdiv.png" alt="Image description" /></p>
<h4 id="heading-recap">Recap</h4>
<p>In this article, we deployed an S3 backend and our CI/CD pipeline using Terraform. We connected our Source stage to GitHub and ran the pipeline to successfully deploy two new VPCs.</p>
<p>Thank you for reviewing my article. Let me know if you have any questions and until next time!</p>
]]></content:encoded></item><item><title><![CDATA[A Cloud Guru Challenge - Improve Application Performance using ElastiCache Redis]]></title><description><![CDATA[Hey Guys!
Welcome to my next blog where we are covering another challenge presented by A Cloud Guru. For this challenge, we'll be improving application performance using ElastiCache Redis. The challenge can be found  here.
For this challenge, A Cloud...]]></description><link>https://devblog.connaker.org/a-cloud-guru-challenge-improve-application-performance-using-elasticache-redis</link><guid isPermaLink="true">https://devblog.connaker.org/a-cloud-guru-challenge-improve-application-performance-using-elasticache-redis</guid><category><![CDATA[Redis]]></category><category><![CDATA[AWS]]></category><category><![CDATA[challenge]]></category><dc:creator><![CDATA[Michael Connaker]]></dc:creator><pubDate>Sat, 12 Jun 2021 22:29:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1623536934024/5J0zVDVGM.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey Guys!</p>
<p>Welcome to my next blog where we are covering another challenge presented by A Cloud Guru. For this challenge, we'll be improving application performance using ElastiCache Redis. The challenge can be found  <a target="_blank" href="https://acloudguru.com/blog/engineering/cloudguruchallenge-improve-application-performance-using-amazon-elasticache?utm_source=discord&amp;utm_medium=social&amp;utm_campaign=cloudguruchallenge">here</a>.</p>
<p>For this challenge, A Cloud Guru has listed out steps that are required to complete the challenge.</p>
<p>They are:</p>
<ul>
<li>Deploy an RDS PostgreSQL database instance</li>
<li>Deploy an EC2 instance to run the app, including Python 3 and the modules psycopg2, flask, configparser, and redis. Add the Python application to the server from GitHub.</li>
<li>Deploy ElastiCache Redis</li>
<li>Convert application to use Redis</li>
</ul>
<p>Github Application:
https://github.com/ACloudGuru/elastic-cache-challenge</p>
<p>My GitHub Submission:
https://github.com/mconnaker/acg-app-performance-challenge</p>
<p>The Goal:
The application itself has been artificially slowed to represent an under-provisioned database server. The goal of this challenge is to add a cache in front of the database, not to optimize the stored procedure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623535845906/VP1Y0l1FS.png" alt="image.png" /></p>
<p>Alright, let's go ahead and break this down. First, we'll need to deploy the resources in AWS. There are two ways we can do this - manually through the AWS Management Console or through IaC. Next, we'll need to make a few manual changes on the EC2 instance, which include creating a proxy on the server and a folder where the application will reside. Lastly, after doing initial testing, we'll modify the application to use Redis.</p>
<h3 id="deployment-using-infrastructure-as-code">Deployment using Infrastructure as Code</h3>
<p>In a nutshell, IaC is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Using IaC allows your code to be the single source of truth, have version control, traceability/accountability and decreases mistakes made through manual processes. IaC also allows testing, monitoring and reviewing errors that happen when attempting to deploy the code.</p>
<p>Each of the Big 3 in cloud computing (AWS, GCP, and Azure) has its own native IaC service: AWS CloudFormation, Azure ARM Templates, and GCP Cloud Deployment Manager. There are also open source deployment tools, such as Terraform, Ansible, Chef, and Puppet.</p>
<p>For this challenge, I will deploy my infrastructure using Terraform, with GitHub for version control. With Terraform, you can use modules or write everything from scratch. A module is a packaged Terraform configuration that makes deployment easy.</p>
<p>My personal experience is with CloudFormation. For this challenge, I decided to write my Terraform code from scratch to better understand how the deployment process works.</p>
<p>The Terraform code I wrote deploys the EC2 instance, RDS, ElastiCache Redis, a key pair, VPC, subnets, NACLs, route tables, and security groups. I also wrote a .sh install script for the EC2 instance, which Terraform can pass in during provisioning. The script installs Python 3, the required modules, PostgreSQL, Nginx, gcc, development tools, and git, and can also clone the GitHub repository where the Python application resides.</p>
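<p>As an illustration of the ElastiCache piece, a minimal Redis cluster in Terraform might look like the following. The resource names, node type, and parameter group here are examples, not my exact code:</p>
<pre><code>resource "aws_elasticache_cluster" "redis" {
  cluster_id           = "app-cache"
  engine               = "redis"
  node_type            = "cache.t3.micro"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis6.x"
  port                 = 6379

  # Place the cluster in the app's VPC and restrict access (names assumed)
  subnet_group_name  = aws_elasticache_subnet_group.redis.name
  security_group_ids = [aws_security_group.redis.id]
}
</code></pre>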
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623536560658/eTUvyOGsR.png" alt="image.png" /></p>
<h3 id="post-deployment">Post Deployment</h3>
<p>After deploying the infrastructure with Terraform, a few changes must be made on the EC2 instance. These are minor configurations, such as grabbing the application's repository from GitHub, creating a folder in the home directory, and configuring the proxy in Nginx.</p>
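<p>For reference, a minimal Nginx reverse-proxy block for a Flask app might look like this. The listening port and upstream address are assumptions; the challenge's actual config may differ:</p>
<pre><code>server {
    listen 80;
    server_name _;

    location / {
        # Forward incoming requests to the Flask app running locally
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
</code></pre>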
<h4 id="pre-caching">Pre-Caching</h4>
<p>Once the app is running, the elapsed time is always above 5 seconds. This is because the application has been artificially slowed to represent an under-provisioned database server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623531949166/LrY_A71fO.png" alt="image.png" /></p>
<p>Now that we've done the initial testing, we can add Redis cache to the application.</p>
<h4 id="caching-with-elasticache-redis">Caching with ElastiCache Redis</h4>
<p>Now that we know the application works and have seen the slowness, we'll implement ElastiCache Redis to boost performance. We'll use a cache-aside strategy, querying the cache first to see if the data is there.</p>
<p>In a cache-aside strategy, the application looks to the cache (Redis) first; if no data is found, it queries the database and populates the cache with the result.</p>
<p>We'll take it a step further and add an invalidation period using Time to Live (TTL). Invalidation expires cached entries so that, after the relevant records are modified in the database, the cache stays fresh.</p>
<p>To this end, we'll make three simple modifications to the application.</p>
<p>First, we'll add <code>import json</code> and <code>import redis</code>, and create a variable called <code>rcache</code> that holds the connection information for ElastiCache Redis.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623532567646/HtnnV9AUY.png" alt="image.png" /></p>
<p>Second, we'll modify the existing code, renaming <code>def fetch(sql)</code> to <code>def dbfetch(sql)</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623532782704/u0FFvl5nz.png" alt="image.png" /></p>
<p>Finally, we'll add in Redis. The SQL statement is used as the key in Redis, and the cache is examined to see if a value is present. If not, the SQL statement is used to query the database, and the result is stored in Redis. We'll also use a TTL to set how long the key is stored before it expires; once the TTL is reached, Redis evicts the key to free the associated memory.</p>
<p>For example:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623533142751/uMB7_RgyB.png" alt="image.png" /></p>
<p>On a cache miss, we call the renamed function, <code>def dbfetch(sql)</code>, to run the query, and we use <code>json</code> to serialize and deserialize the results.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623533309014/xAoCyOR_4.png" alt="image.png" /></p>
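<p>Putting the three modifications together, the cache-aside fetch looks roughly like the sketch below. This is a minimal reconstruction of what the screenshots show, not the exact code from the app: the cache client is passed in as a parameter so the logic can be exercised without a live Redis, and in the real app it would be the <code>rcache</code> variable created with the ElastiCache Redis endpoint and port 6379; the TTL value here is illustrative.</p>

```python
import json

CACHE_TTL = 300  # seconds; example value, not the app's actual TTL

def dbfetch(sql):
    """Placeholder for the renamed database query function (the real app queries PostgreSQL)."""
    raise NotImplementedError

def fetch(sql, cache, ttl=CACHE_TTL, query=dbfetch):
    """Cache-aside: check Redis first, fall back to the database on a miss."""
    cached = cache.get(sql)           # the SQL statement itself is the cache key
    if cached is not None:
        return json.loads(cached)     # cache hit: deserialize and return
    result = query(sql)               # cache miss: run the query against the database
    cache.setex(sql, ttl, json.dumps(result))  # store with a TTL so stale keys expire
    return result
```

<p>On the first call the cache misses and the database is queried; every call within the TTL after that is served straight from Redis, which is why the elapsed time drops from seconds to milliseconds.</p>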
<p>Once the app is modified and run again, we'll see that the elapsed time drops from 5 seconds to milliseconds. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1623533380032/8j6nm0Ij8.png" alt="image.png" /></p>
<p>And that's it.</p>
<h4 id="final-thoughts">Final Thoughts</h4>
<p>This challenge was very fun to work through. Coding in general is something I lack experience in, so I am glad I got to learn a few new things with Python.</p>
<p>Having built the environment and reviewed it, I find there are some improvements that could be made to this project. First is deploying into ECS to take advantage of Docker containerization; with Docker, we could run Nginx and the application as two separate containers. Second is using AWS CodePipeline, CodeDeploy or Jenkins for CI/CD into ECS.</p>
<p>Many thanks to A Cloud Guru for presenting this challenge.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Cloud Resume Challenge]]></title><description><![CDATA[AWS Cloud Resume Challenge
website: https://cloudresumechallenge.dev/instructions/
So, I am a little late to the party on the Cloud Resume Challenge. The Cloud Resume Challenge started on April 23, 2020 and had certain requirements and conditions tha...]]></description><link>https://devblog.connaker.org/aws-cloud-resume-challenge</link><guid isPermaLink="true">https://devblog.connaker.org/aws-cloud-resume-challenge</guid><category><![CDATA[AWS]]></category><category><![CDATA[challenge]]></category><dc:creator><![CDATA[Michael Connaker]]></dc:creator><pubDate>Sat, 15 May 2021 18:56:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1621104934556/CvgP2lqYi.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="aws-cloud-resume-challenge">AWS Cloud Resume Challenge</h1>
<p>website: https://cloudresumechallenge.dev/instructions/</p>
<p>So, I am a little late to the party on the Cloud Resume Challenge. The Cloud Resume Challenge started on April 23, 2020 and had certain requirements and conditions that had to be met.</p>
<p>Those conditions were:</p>
<ul>
<li>HTML/CSS based website</li>
<li>Hosted in S3 (Static Website)</li>
<li>HTTPS</li>
<li>DNS</li>
<li>JavaScript</li>
<li>Database</li>
<li>API</li>
<li>Python</li>
<li>Infrastructure as Code</li>
<li>Source Control</li>
<li>CI/CD (Frontend / backend)</li>
</ul>
<h2 id="why-take-the-challenge">Why take the challenge</h2>
<p>For me, the challenge meant learning more about the components of AWS. While I have worked in AWS fluently, my core work is in EC2, RDS, IaC, VPCs, CloudWatch, CloudTrail, Security Groups, and IAM. That meant there were still services I either did not know or had only a bit of working knowledge of, but had never used from scratch: coding in JavaScript and Python, and configuring API Gateway, CloudFront, AWS Certificate Manager, and Lambda Functions.</p>
<h2 id="completing-the-challenge">Completing the Challenge</h2>
<p>I completed the challenge by breaking down what was needed. I skipped using IaC in this challenge only because I wanted to learn and understand the services in the Management Console first. I will most likely go back over this, build proper IaC, and redeploy with the static website pointing at the new API later on.</p>
<p>Breaking this down, I needed:</p>
<p>PageCount Counter:</p>
<ul>
<li>DynamoDB</li>
<li>Lambda Function</li>
<li>API Gateway</li>
<li>Scripts</li>
</ul>
<p>Website:</p>
<ul>
<li>HTML/CSS</li>
<li>s3</li>
<li>CloudFront</li>
<li>AWS Certificate Manager</li>
<li>Domain</li>
<li>DNS</li>
</ul>
<h3 id="pagecount-counter">PageCount Counter</h3>
<h4 id="dynamodb">DynamoDB</h4>
<p>DynamoDB was an excellent choice for this challenge. It is a serverless key-value and document database that does not require deep knowledge of querying or database administration; it is simple in design and easy to create. In the table, I added two attributes: the primary key, a string value that the Python script looks up, and a second attribute used to update the count.</p>
<h4 id="scripts">Scripts</h4>
<p>I am not a coder by any stretch, so I'd like to thank Don Cameron, who also did this challenge and shared his script in his documentation. It helped me tremendously.</p>
<h4 id="lambda-function">Lambda Function</h4>
<p>For the Lambda Function, I created one from scratch using Python 3.7. I then applied Don's code to the Lambda Function.</p>
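<p>Don's code is his own, but a visit counter of this shape typically comes down to a few lines of boto3. The sketch below is a hypothetical reconstruction, not his script: the table name and key value are made up, and the table object is injectable so the logic can be exercised without AWS credentials.</p>

```python
def lambda_handler(event, context, table=None):
    """Increment the visit count stored in DynamoDB and return it to the caller."""
    if table is None:
        import boto3  # resolved inside the handler so the module imports without AWS
        table = boto3.resource("dynamodb").Table("visit-count")  # hypothetical table name
    resp = table.update_item(
        Key={"id": "visits"},                      # hypothetical partition key value
        UpdateExpression="ADD #c :one",            # atomically add 1 to the counter
        ExpressionAttributeNames={"#c": "count"},  # COUNT is a DynamoDB reserved word
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # matches the CORS setup on the API
        "body": str(int(resp["Attributes"]["count"])),
    }
```

<p>Because <code>ADD</code> is applied server-side, the increment is atomic even under concurrent page loads.</p>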
<h4 id="api-gateway">API Gateway</h4>
<p>I created a REST API in API Gateway connected to the Lambda Function. The API was configured with a GET method and with CORS. </p>
<h3 id="website">Website</h3>
<h4 id="html-css">HTML / CSS</h4>
<p>For this challenge, I used a CSS template with slight modifications to the CSS code. Additionally, with a slight modification to Don's code, I was able to add the API Gateway URL to his script.</p>
<h4 id="s3">s3</h4>
<p>I created two S3 buckets - <code>connaker.org</code> and <code>www.connaker.org</code>. I configured the <code>connaker.org</code> bucket as a static website, and <code>www.connaker.org</code> as a redirect to <code>connaker.org</code>. <code>connaker.org</code> was made public with an S3 bucket policy allowing public access. Using GitHub Actions, I deploy the website from GitHub to <code>connaker.org</code>.</p>
<h4 id="domain-certificate-manager-and-cloudfront">Domain, Certificate Manager and CloudFront</h4>
<p>I own the domain <code>connaker.org</code>. I created a certificate in AWS Certificate Manager (ACM) and added its validation record as a CNAME in the Google Domains DNS records. After validation, I configured CloudFront with the alternate domain name (CNAME) <code>*.connaker.org</code> and the custom SSL certificate I created in ACM. I added the origin as the S3 bucket's static website endpoint (NOT the S3 bucket name) and configured the Behaviors for HTTP and HTTPS. I will update the Behaviors to redirect HTTP to HTTPS.</p>
<p>Finally, I added the CNAME record for <code>awsrc.connaker.org</code> to point to the CloudFront Domain Name.</p>
<h4 id="github-and-github-actions">GitHub and GitHub Actions</h4>
<p>I created a public repo (https://github.com/mconnaker/awsrc) where I configured GitHub Actions with the Access Key and Secret Access Key of an IAM user. Any frontend (website) changes made on my local machine and pushed to GitHub are automatically deployed to the S3 bucket.</p>
<h2 id="what-was-the-hardest-part">What was the Hardest Part</h2>
<p>The hardest part for me was setting up CloudFront and ACM to work with Google Domains. It turns out I had most of CloudFront and ACM right; understanding how to correctly add the CNAME to the Google Domains DNS records was what confused me.</p>
<h2 id="which-part-did-i-enjoy">Which Part did I enjoy?</h2>
<p>I enjoyed everything about this challenge. It was fun diving in and working on these particular services from scratch.</p>
<h2 id="my-submission">My Submission</h2>
<p>website: https://awsrc.connaker.org <br />
github: https://github.com/mconnaker/awsrc <br />
diagram:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1621106474251/3MlWmxMcP.jpeg" alt="AWSRC.jpg" /></p>
]]></content:encoded></item><item><title><![CDATA[A Cloud Guru - Azure Cloud Resume Challenge!]]></title><description><![CDATA[Hey Guys!
Today's blog is covering a challenge. Yep, presented by A Cloud Guru! In this challenge, you are to build a resume.
Seems simple enough, right? If only... 
Lets get started:
A Cloud Guru Presents: "Azure Cloud Resume Challenge of 2021"
The ...]]></description><link>https://devblog.connaker.org/a-cloud-guru-azure-cloud-resume-challenge</link><guid isPermaLink="true">https://devblog.connaker.org/a-cloud-guru-azure-cloud-resume-challenge</guid><category><![CDATA[Azure]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[challenge]]></category><dc:creator><![CDATA[Michael Connaker]]></dc:creator><pubDate>Sat, 15 May 2021 14:35:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1621089223588/KXLV617jc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey Guys!</p>
<p>Today's blog is covering a challenge. Yep, presented by A Cloud Guru! In this challenge, you are to build a resume.</p>
<p>Seems simple enough, right? If only... </p>
<p>Lets get started:</p>
<h2 id="a-cloud-guru-presents-azure-cloud-resume-challenge-of-2021">A Cloud Guru Presents: "Azure Cloud Resume Challenge of 2021"</h2>
<p>The challenge was dropped by none other than Gwyneth Pena-Siguenza over at A Cloud Guru. It required you to create a hundred-percent Azure-hosted version of your resume. Among the requirements: build it in HTML and CSS, write JavaScript that calls an API backed by a database, host the website in Azure Blob Storage, enable HTTPS and custom domain support, and finally, have GitHub Actions running.</p>
<p>website: https://acloudguru.com/blog/engineering/cloudguruchallenge-your-resume-in-azure?utm_source=discord&amp;utm_medium=social&amp;utm_campaign=cloudguruchallenge</p>
<h2 id="why-take-the-challenge">Why take the challenge</h2>
<p>Currently, I work in AWS as a consultant at a managed service provider. I have little experience with Azure and felt this challenge was a great way to build my skills there. I'm also someone who is continuing to build DevOps and scripting skills, so I felt this was a great opportunity to grow those skillsets as well.</p>
<h2 id="completing-the-challenge">Completing the Challenge</h2>
<p>Having never worked in Azure, I completed this challenge by first understanding the Azure environment. I took the AZ-900 Azure Fundamentals course on A Cloud Guru and found similarities between Azure and AWS. I also went ahead, took the certification exam and passed it.</p>
<p>After doing this, I had a basic understanding of the components within Azure.</p>
<p>I started off by creating a Resource Group where all of my services would reside. Next, I broke down the services I would need to complete the challenge, splitting them between the backend counter and the frontend website.</p>
<p>Backend - Visitcount Counter:</p>
<ul>
<li>CosmosDB (Serverless, NOSQL database)</li>
<li>Azure Functions (Node.js, LTS 12)</li>
<li>API Gateway</li>
<li>Script (node.js)</li>
</ul>
<p>Frontend - Website</p>
<ul>
<li>HTML/CSS</li>
<li>Blob Storage</li>
<li>CDN</li>
<li>Domain</li>
<li>DNS</li>
</ul>
<h2 id="backend-visitcount-counter">Backend - Visitcount Counter</h2>
<h3 id="cosmosdb">CosmosDB</h3>
<p>For this challenge, I chose to use Cosmos DB. Cosmos DB is a serverless NoSQL database that is simple to use and easy to configure. The challenge did not require anything more complex, such as MongoDB, PostgreSQL or MySQL.</p>
<p>Cosmos DB does require some understanding to deploy. First, I needed to create the Cosmos DB account. This identifies the type of API you will be using (Core (SQL), Azure Cosmos DB API for MongoDB, Cassandra, Azure Table, Gremlin), the capacity mode (provisioned throughput or serverless) and other options.</p>
<p>For API, I used Core SQL and for Capacity mode, I chose Serverless.</p>
<p>After creating the account, you'll need to create the database. To do this, you create a Container; at creation time, you give it a database id, a container id and a partition key.</p>
<p>After creating the container and database, you will need to create an Item within the database.</p>
<p>New Item:</p>
<pre><code>{
    "id": "id name",
    "count": 0
}
</code></pre><p><br />
And that is it.
<br /><br /></p>
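<p>I created the item through the Portal, but the same thing can be done programmatically. The sketch below uses the azure-cosmos Python SDK purely as an illustration of the Portal steps above; the endpoint, key, database id and container id are all placeholders, and the item matches the New Item shown above.</p>

```python
def initial_counter_item(item_id="id name"):
    """The New Item from above: a document with an id and a count starting at 0."""
    return {"id": item_id, "count": 0}

def create_counter(endpoint, key):
    """Upsert the counter item into an existing Cosmos DB container (ids are placeholders)."""
    from azure.cosmos import CosmosClient  # pip install azure-cosmos
    client = CosmosClient(endpoint, credential=key)
    container = client.get_database_client("resume-db").get_container_client("counter")
    return container.upsert_item(initial_counter_item())
```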
<h3 id="azure-functions">Azure Functions</h3>
<p>For Azure Functions, I decided to create a very basic script using Node.js 12 LTS. Creating a Function is not straightforward and does require some research.</p>
<p>I created the Function in the Portal rather than using Visual Studio Code or the CLI. I found working with VS Code and the CLI difficult, and for this challenge, the visualization in Integration and manually adding/adjusting the code in Code + Test was easier.</p>
<p>When creating the function, you identify the runtime stack and version; in this case, Node.js 12 LTS. This creates the function app where your function will live. I used an HTTP Trigger and set the authorization level to anonymous.</p>
<p>After creating the function, inputs and outputs had to be added. To do this, I went through the Integration section and created an input and an output bound to the Cosmos DB database I created earlier.
<br /><br /></p>
<h3 id="scripts">Scripts</h3>
<p>For the script, I used a basic Node.js script to increment the count.</p>
<p>Code Deployed:</p>
<pre><code><span class="hljs-built_in">module</span>.exports = <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">context, req, data</span>) </span>{
    context.log(<span class="hljs-string">'JavaScript HTTP Trigger function processed a request.'</span>);

    context.bindings.outputDocument = data[<span class="hljs-number">0</span>];
    context.bindings.outputDocument.count += <span class="hljs-number">1</span>;

    context.res = {
        <span class="hljs-attr">body</span>: data[<span class="hljs-number">0</span>].count
        };
}
</code></pre><p><br /><br /></p>
<h4 id="api-gateway">API Gateway</h4>
<p>With Azure Functions, there isn't a need to create an API Gateway. In Code + Test, there is a Get Function URL button; we can add this URL to a main.js script for the frontend to call the API.</p>
<h2 id="frontend-website">FrontEnd - Website</h2>
<h3 id="htmlcss">HTML/CSS</h3>
<p>For this challenge, I used a CSS template with slight modifications to the CSS code.</p>
<h3 id="blob-storage">Blob Storage</h3>
<h3 id="cdn">CDN</h3>
<p>Your CDN Endpoints will live in a CDN profile. You will create a new CDN Profile, giving it a name and selecting the pricing tier. During profile creation, you can also create the CDN endpoint with its origin and origin hostname.</p>
<p>After this is created, the next step is setting up a custom domain.</p>
<h3 id="domain-dns-and-ssl-certificate">Domain, DNS and SSL Certificate</h3>
<p>I own the domain <code>connaker.org</code>. With Azure, there isn't a need to do any certificate management, as the CDN handles it. The first step is to add the endpoint hostname as a CNAME record on the subdomain you wish to use (such as <code>acgarc</code>). Once this is created, go to the CDN endpoint you created earlier and add a custom domain; mine is <code>acgarc.connaker.org</code>. Azure should recognize it from the CNAME. After that, you can enable HTTPS on the custom domain and request a CDN-managed certificate using TLS 1.2; Azure CDN will then create the certificate for you.</p>
<h2 id="what-was-the-hardest-part">What was the Hardest Part</h2>
<p>The hardest part of this challenge was configuring Azure Functions. It was difficult because I have no practical experience or knowledge in creating custom scripts. I also tried to complete the challenge using Visual Studio Code and had a lot of issues, and I originally tried using Gwyn's C# code but found it difficult to modify and update to my needs.</p>
<p>Settling on a simple Node.js script was the easier solution, as was using the Portal to create the integrations and add the code block.</p>
<h2 id="which-part-did-i-enjoy">Which Part did I enjoy?</h2>
<p>I enjoyed everything about this challenge. It was fun diving in and working on these particular services from scratch.</p>
<h2 id="my-submission">My Submission</h2>
<p>website: https://acgarc.connaker.org <br />
github: https://github.com/mconnaker/Azurechallenge <br />
diagram:</p>
]]></content:encoded></item></channel></rss>