Writing Tech Blogs Is Hard Work

I once read an article that said more people need to write technical blogs. That the problem with much of the technology field was that people did not write in-depth articles on the stuff they are doing, and that if more people took the time to post a blog entry here and there, we would all be better for it. As nice as that sounds, I have to say that writing technical posts is difficult, time consuming, and the results can quickly go out of date.

First off, writing in general is not easy, and that is before you add the technical part on top of it. There are a number of brilliant developers, system engineers, and DevOps people who can create some of the most complex and unique solutions to problems, and yet cannot begin to write the first bit of documentation or narrative to describe what it is they have done. It is not that they are dumb; they just have not honed the writing skill, or might simply not be proficient at it. Writing is like coding: if you don't use it, you can lose it. It is also a skill that can be developed and honed over time. My own writing skills are rusty after a long time away from it, and I am trying to get back into the flow.

Some people recommend that to become talented at writing, you need to dedicate an hour a day to it, or five hours a week. And that is just the writing part. It does not count the time spent working on the technical elements needed to provide content for the audience you are trying to reach. Maybe a few years back, when I did not have family obligations, this would have been possible, but now I have to sneak in time here and there. And for the casual tech blogger, daily practice is not going to happen. Unless you are doing a lot of writing for work, there is little time to develop your writing skills. This leads many people to turn away from writing a technical article even though they might have some of the best ideas, if only they could get them into a usable format.

Now, the next big reason it is so hard to write technical articles is that gathering the technical information together is not easy. Don't get me wrong, that is not to say there is a shortage of topics to write on. Far from it; there are probably thousands of areas and topics that could be written about. But, just like writing, it takes time to gather that information. There are a few high-profile bloggers who are able to dedicate their jobs to writing technical blogs. Many of them are evangelists or full-time employees whose job it is to talk about certain technical areas. That is great for them, and to be honest, I am a bit envious. For the average DevOps engineer or developer, it is not so easy.

There are some companies that will allow you to blog about the work you are doing, but for most people that is not the case. So, on top of your full-time job, you must use your own resources and your own time to pull together the data, the code, and whatever else is needed just to begin writing the article. Once you have done the work to vet the project or topic you are looking to write about, you have to circle back and figure out how to organize the information into a story for others to read. This goes back to item number one, and I have already mentioned how difficult even that first step is. Now we are taking it to a higher level: you do not just have to write a story, you must structure it around the technical information.

So, after getting the data together and deciding what you are going to write about, you have to structure it. There are screenshots to take, code snippets to share, and links to technical references to include. As easy as all of that sounds, it is much harder in practice. Figuring out how to crop an image so it is not 4 MB in size, or how to align the images so it all looks correct. Then there is the question of where to put the code you are going to share. How do you get line numbers showing on the code? How do you enable syntax highlighting on the code samples? None of that is easy, and all of it can be overwhelming when you just want to write an article to share some information with other techies.

And the last fact of the matter is that you never know if all your efforts are for naught. You could write a great article, but if people don't find out about it, then what do you do? Do you keep plugging away and hope people will stumble upon the articles you have written? Nobody wants to do a bunch of work for nothing. So, to recap: writing technical articles is difficult, time consuming, and not always rewarding. I thank all the people who stick with it, and I understand all of those who don't.

Configuring AWSCLI and Python on Windows

Using a Windows 10 machine has become a rather interesting experiment. What started as a place to play video games and have access to a few programs that are not easily available on Linux has turned into a test of whether I can now do all the things on Windows 10 that I could do on Linux.

Turns out, I wanted to test out the Cloud Directory service from AWS. I figured the two easiest ways to do this would be via the AWS CLI and Python. It did not occur to me that I had neither of these installed until I opened up Cmder.exe and typed

aws

and then

python

and both came back with a "not found".

Wait, what? Where are my programs!

So, now I need to install and configure both of these. The test is to see how easy or difficult it is to get this set up on this Windows machine. Here is a quick list of the normal steps I take to install the AWS CLI on almost any Linux machine; a sketch of the equivalent commands follows the list.

  1. Install Python
  2. Configure a virtual environment to hold my cli tools
  3. Use pip to install aws cli
  4. Configure aws cli
  5. Test that it all works
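
For reference, here is a minimal sketch of those steps on a Linux box, assuming Python 3 is already installed via your package manager; the paths and environment name are just illustrative:

python3 -m venv ~/venvs/awstools       # steps 1 and 2: create a virtual environment
source ~/venvs/awstools/bin/activate   # activate the environment
pip install awscli                     # step 3: install the AWS CLI into the venv
aws configure                          # step 4: prompts for keys, region, and output format
aws sts get-caller-identity            # step 5: verify the CLI can talk to AWS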

The first step in setting all of this up is to get Python installed on your machine. The AWS CLI is built on Python, and as such you need Python installed in order to use it. Now, some people will install the AWS CLI at the root of the machine, using the system's globally installed Python. Having worked on multiple versions of Python at the same time, and on projects that use different libraries, I almost always set up a virtual environment to run my Python programs and other sundry tools from. This way, I don't cross-contaminate my streams, and I have a clearly defined idea of which versions I am using on different projects.

Installing Python

This is a relatively straightforward task. Click on the Python installer that suits your needs, download it, and follow the install prompts. I chose Python 3.6.7 because it is the version I am already using for some Lambda programs in AWS, and because some changes in 3.7 have broken a few libraries. One big one is 'async' and 'await' becoming reserved keywords. Follow the prompts to install Python and then restart your favorite command line tool. I run Bash via Git, and use cmder.exe as my shell program.

Once you have it installed, you should be able to run the following to verify that Python is available in your workspace:

python --version

This should output 'Python 3.6.7', or whichever version of Python you installed.

Setting up Virtual Environment and AWS CLI

The next part is to create the virtual environment and then use it to install the AWS CLI. First we run Python to create the virtual environment; then we activate the environment and install the AWS CLI. It is just a few simple commands, and you should then be up and running.

c:\ericv\dev\> python -m venv p367
c:\ericv\dev\> p367\Scripts\activate.bat
(p367) c:\ericv\dev\> pip install awscli
(p367) c:\ericv\dev\> aws help

And bam! You are done. Now, there is still configuring the AWS CLI with your credentials, but that is a quick step, shown below. It took me longer to write this up than it took to do the install, which in itself is a good thing to know. Now the question is whether I will run into any more problems. But so far, so good.
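
For reference, the credential setup is a single interactive command; it prompts for your own keys, and the region and output format shown here are just example values:

(p367) c:\ericv\dev\> aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-west-2
Default output format [None]: json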


DevOps is more than just about developers

There. It has been said. Take a moment, and think about it.

There is a push in the community to talk about Continuous Integration / Continuous Deployment (CI/CD) and to tie it all to the DevOps movement: which practices get code to market the fastest, with a minimal number of defects. And I will admit, I agree wholeheartedly that CI/CD, or some sort of pipeline, is important to allow software development teams to move fast and innovate. Or, to just roll out code more often than every six months. I would almost argue, though, that CI/CD is more akin to Agile and Scrum methodologies than it is to DevOps.

When it comes to DevOps, there is a side to this equation that is not commonly being addressed: Infrastructure and Operations.

Infrastructure Ops

Call it InfraOps, or whatever you want. This is the glue that holds together all the services that applications run on top of. It is the stuff people take for granted when it works, but cry foul about when things go down. When servers fall over at 2 a.m., InfraOps is what keeps the lights on.

It is easy to dismiss this side of the DevOps house. But let's quickly run through the items that need to be covered here.

Infrastructure Operations List

  • System Deployment
    • Operating systems
    • Patching
  • Virtualization (Docker, AWS, Azure)
  • CMDB
  • Monitoring
  • Software Deployment
    • 3rd Party
    • In house
  • Load Balancers
  • Hardware Failover
  • Disaster Recovery
  • Caching Servers
  • Database Systems
  • Development Environments
  • Backups

This is by no means an exhaustive list of everything that must be handled for a solution to be properly deployed and supported. But the idea is to bring to light the side of the DevOps world that is often overlooked. It does not matter how good your code is if you don't have an infrastructure that will stay online and support your applications. This is often an overlooked and understaffed area that needs more attention.

Automation is king

The only way to ensure that the infrastructure is able to support ongoing systems, deployments, and updates is with automation. Every time a machine is built by hand, a configuration file is manually changed, or updates are performed manually, you have introduced risk into your environment. Each of these interactions is potentially a slowdown in the pipeline, or worse, a manual change that never gets documented.

There are a number of tools and processes built around Infrastructure Automation: Chef, Ansible, CFEngine, Salt; the list goes on. In some places, people have rolled their own. The specific tool matters less than the direction: moving away from manual intervention toward a more dynamic, scalable infrastructure. These tools all require in-depth knowledge of the underlying systems as well as the ability to code. But the end goal is DevOps on the infrastructure side as well as on the development side of the house.
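
To make the difference concrete, here is a tiny sketch using Ansible in ad-hoc mode; it assumes Ansible is installed and that an inventory group named webservers exists, both of which are assumptions for illustration:

# Manual and undocumented: ssh into each box and install a package by hand.
# The same change, expressed once and repeatable across every host:
ansible webservers -b -m apt -a "name=nginx state=present"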

There are places where the tools to support automation are still lacking. While new SaaS solutions are filling some of these gaps, if you need to run your monitoring system in house, many of the solutions that currently exist are not built around dynamic infrastructure and ephemeral machines. The tools need to catch up with the needs of the users. In the meantime, the back-end DevOps folks write workarounds to handle situations like this.

Moving Forward

Moving forward, let us try to look at all aspects of DevOps, from development to application deployment, and from hardware to infrastructure and scaling design. There is still much work to be done, but we are moving in the right direction.

Building a Windows 2012R2 Instance in AWS with Terraform

Terraform is an application by HashiCorp that is designed to treat infrastructure as code. Lately, I have been working with it to begin automating resources within AWS, and I have been quite pleased.

Let's get started building out a Windows 2012 R2 server on AWS with Terraform.

You are going to need the following items already configured in AWS in order for this to work, as I am not going to be using Terraform to build them out.

Since the purpose of this test is not to create a VPC, subnets, security groups, and so on, all of those will need to be created beforehand. You will need their identifiers later, for use in building out the server. I always recommend building out your VPC in its own stack, and never mixing it with others; it is a vital piece of the infrastructure that should be touched as little as possible.

Items Needed for this Demo:

  • IAM Instance Role
  • VPC Security Group ID
  • Subnet ID
  • Key to use for Instance Creation

For this example you are going to need just two files: variables.tf and main.tf. Technically, you can name them anything you want, as long as they end in .tf. I actually find it more confusing to go with main.tf and variables.tf, so I am going to go ahead and change the names of the files. I use Atom for editing my files, and descriptive filenames are nice.

mkdir stand_alone_windows_2012
cd stand_alone_windows_2012
touch win2012_test_instance.tf
touch win2012_test_variables.tf

Now let us add the file that we are going to need for the variables. (I will admit that there are some values hard coded into the second file, but that is because I have been testing this to get it working.)

variable "admin_password" {
  description = "Windows Administrator password to login as."
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default = "us-west-2"
}

# Windows Server 2012 R2 Base
variable "aws_amis" {
  default = {
    "us-east-1" = "ami-3f0c4628"
    "us-west-2" = "ami-b871aad8"
  }
}

variable "key_name" {
  description = "Name of the SSH keypair to use in AWS."
  default = {
    "us-east-1" = "AWS Keypair"
    "us-west-2" = "AWS Keypair"
  }
}

variable "aws_instance_type" {
  default = "m4.large"
}

variable "aws_subnet_id" {
  default = {
    "us-east-1" = "subnet-xxxxxxxx"
    "us-west-2" = "subnet-xxxxxxxx"
  }
}

variable "aws_security_group" {
  default = {
    "us-east-1" = "sg-xxxxxxxx"
    "us-west-2" = "sg-xxxxxxxx"
  }
}

variable "node_name" {
  default = "not_used"
}

You will need to go through this file, update the variables as needed, and create any resources that you do not already have. That would include subnets and VPC-based security groups.

The next file is win2012_test_instance.tf. This is what does all the heavy lifting. In my example I am also installing Chef, but that is because I plan on automating my entire infrastructure, not just server creation. Note that the user data wraps the batch commands in <script> tags and the PowerShell in <powershell> tags, which is how EC2 expects Windows user data to be delivered.

# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

data "template_file" "init" {
    /*template = "${file("user_data")}"*/
    template = <<EOF
<script>
  winrm quickconfig -q & winrm set winrm/config/winrs @{MaxMemoryPerShellMB="300"} & winrm set winrm/config @{MaxTimeoutms="1800000"} & winrm set winrm/config/service @{AllowUnencrypted="true"} & winrm set winrm/config/service/auth @{Basic="true"}
</script>
<powershell>
  netsh advfirewall firewall add rule name="WinRM in" protocol=TCP dir=in profile=any localport=5985 remoteip=any localip=any action=allow
  $admin = [ADSI]("WinNT://./administrator, user")
  $admin.SetPassword("${var.admin_password}")
  iwr -useb https://omnitruck.chef.io/install.ps1 | iex; install -project chefdk -channel stable -version 0.16.28
</powershell>
EOF

    vars {
      admin_password = "${var.admin_password}"
    }
}

resource "aws_instance" "win2012_instance" {
  connection {
    type = "winrm"
    user = "Administrator"
    password = "${var.admin_password}"
  }
  instance_type = "${var.aws_instance_type}"
  ami = "${lookup(var.aws_amis, var.aws_region)}"
  key_name = "${var.key_name}"
  tags {
    Name = "MY_DYNAMIC_STATIC_NAME",
    Env = "TEST"
  }
  key_name = "${lookup(var.key_name, var.aws_region)}"
  iam_instance_profile = "STATIC_ROLE_NAME_SHOULD_BE_A_VARIABLE"
  tenancy = "dedicated"
  subnet_id = "${lookup(var.aws_subnet_id, var.aws_region)}"
  vpc_security_group_ids = ["${lookup(var.aws_security_group, var.aws_region)}"]
  /*user_data = "${file("user_data")}"*/
  user_data = "${data.template_file.init.rendered}"
}
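
With both files in place, a typical run looks something like the following; the password value is a placeholder, and depending on your Terraform version an init step may or may not be required first:

c:\ericv\dev\stand_alone_windows_2012> terraform init
c:\ericv\dev\stand_alone_windows_2012> terraform plan -var "admin_password=ReplaceMe123!"
c:\ericv\dev\stand_alone_windows_2012> terraform apply -var "admin_password=ReplaceMe123!"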

Terraform vs Cloudformation — First Thoughts

Building out AWS infrastructures by hand is not something to take on lightly. Just as with building out a data center or a service in any other cloud environment (Google, VMware, etc.), building a bunch of systems by hand is a good way to inherit a pile of technical debt and a headache or three. On top of that, building out servers manually is slow and cumbersome, and because the work is done by hand, it is easy to make mistakes and introduce snowflakes into your environment. Instead, we need to address the build-out with automation tooling.

There are a number of options for building out an infrastructure in AWS, so this is not to say that Terraform and CloudFormation are the only ones. They are, however, two options that have decent support, are mature, and are used by more than one person. Another option is to roll your own solution directly against the AWS API in any number of languages; that, however, is potentially more trouble than it is worth. Maybe later we can discuss ways to build out and probe an environment with your own tools, but for now let us stick to talking about Terraform and CloudFormation.

What are CloudFormation and Terraform? 

CloudFormation is a tool written by Amazon Web Services as a way to create and control a collection of resources within AWS. It is under continuous development and improvement by the AWS team, for use on AWS.

Terraform is a product from HashiCorp. It is designed to treat your infrastructure as code, to work with multiple cloud providers, and to be versioned under source control like any of your code projects.

CloudFormation

I have used CloudFormation to build out entire environments, single servers, and test stacks. It is a very useful tool for what it does, with a few limitations.

CloudFormation is AWS specific. Very shortly after a new service comes out, it is supported in CloudFormation. This means that almost any service you are going to want to use will be available through CloudFormation.

There is a plethora of examples of how to use CloudFormation. AWS is great about providing examples, and it does not fall short when it comes to CloudFormation: there are examples covering various services, and even how to integrate with Chef and Puppet.

A downside of CloudFormation is that if you don't manage all your resources through it, you can cause CloudFormation to hang. This can happen if you delete, outside of CloudFormation, a resource that CloudFormation created.
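
A hypothetical sequence that triggers this, with the stack name and instance ID as placeholders:

# An instance that CloudFormation created gets terminated by hand:
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
# A later update to the stack can then fail or hang, because
# CloudFormation's record of the stack no longer matches reality:
aws cloudformation update-stack --stack-name my-stack --use-previous-template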

Terraform

Terraform has been designed to be cloud agnostic through its different providers. This means that if you are in a mixed environment, you can use the same tool to build out your infrastructure on AWS, GCP, or Azure. That is definitely a plus if you are not dedicated to AWS, but it can be a disadvantage if you want to use Terraform to manage a brand-new service.

Because Terraform is not designed specifically for AWS, you may end up in a situation where you have to write your own plugin to manage a service or resource inside of AWS. It is great that Terraform is open source and anyone can add features, but it is a bit of a pain that you may be stuck waiting for new AWS features to be supported.

Support is mixed. There are not as many examples of how to use Terraform as there are for CloudFormation, so you may end up scouring search engines for examples of the features you want to use.

But it has been designed to treat your infrastructure as code, so it does offer ways to track your environment and its inputs.

Conclusion

At this point it is too early to say. I am going to have to think about it more, and do a deeper dive into both Terraform and CloudFormation.