Why go Serverless

The question is not why you would want to go serverless; the question is why you would not.

Let us think about this for just a second. In a typical environment, you not only have to think about the flow of the application and how its pieces fit together, but also about the underlying architecture. Now that DevOps is becoming more common, it is sometimes the developers who find themselves having to worry about these issues. In a more traditional shop, there is a team that the application is “thrown over the fence” to, and that team has to deal with the underpinnings. In other companies, it is the DevOps group that works with the Dev team on designing and handling the infrastructure. But no matter how you slice it, the infrastructure is a huge headache that has to be managed.

In a serverless world, you have to ensure that the application runtime you are programming for matches your development environment, but for almost all intents and purposes, that is where it ends. When dealing with servers, there are far more factors to take into account. For a moment, consider just some of the items that have to be considered with a classic server architecture:

  • Physical or Virtual Hardware
  • Operating Systems
    • Patches
    • Package installation
  • Monitoring
    • Server uptime
    • Server performance
  • Scheduling, such as cron

The list goes on and on; this is just a cursory list to get you thinking. When dealing with servers, there is no magic button to push that takes care of all of your issues. You have to deploy the system, patch it, configure it for the application, install the application, add it to inventory, and add monitoring. All of this is time taken away from the application that you are trying to develop. Oh, and I failed to mention all the security considerations that must also be taken into account. (And yes, you still have to think about security when dealing with serverless.)

The normal response to this is “my application can never be serverless.” There are some rare occasions where this is true, but in many situations it is simply not true anymore. With the various services offered by AWS and Azure, you can make serverless work for a wide variety of applications. My area of specialty is AWS, and as such, most of my examples will come from there. If you wanted to build a serverless application in AWS, you would probably use the following services:

  • Lambda — the serverless code execution platform
  • CloudWatch — monitoring is good
  • API Gateway
  • S3 — for hosting the static front-end content, possibly powered by a JavaScript framework
  • DynamoDB
  • CloudFront
  • Cognito

Using these tools in various ways allows you to create applications of all sorts. In fact, I believe that tying in other parts of the AWS family of tools would let you create serverless applications in ways that I have never thought of.

But how do you start using serverless? It is best to look at your current environment, find a task that is short lived and runs on a machine, and move it. An even better use case is to find a repetitive task that needs to be automated and try doing it with Lambda. This lets you get your feet wet with something that is already being accomplished, without breaking existing support models. Finding something simple is the best way to get a win under your belt and gain the confidence to keep expanding your use of it.

The big thing to remember is that all of this serverless stuff can be used in every area of the stack, from application development to application and server support. For example, automating the backups of long-lived servers could be done via Lambda; that is an infrastructure helper. Converting images into thumbnails and serving them to the application would be an application-specific task, and one that could support a classic application running on servers.

Do not let the idea of serverless scare you off. Take a moment, see where you could make it fit, and give it a try. Worst case, you don’t like it and have lost a few cycles. Best case, you have found a new tool that you can add to your quiver.


Building a Windows 2012 R2 Instance in AWS with Terraform

Terraform is an application by HashiCorp that is designed to treat infrastructure as code. Lately, I have been working with it to begin automating resources within AWS, and I have been quite pleased.

Let's get started with building out a Windows 2012 R2 server with Terraform on AWS.

You are going to need the following items configured in AWS in order for this to work, as I am not going to use Terraform to build them out.

Since the purpose of this test is not to create a VPC, subnets, security groups, etc., all of those will need to be created beforehand. You will need the identifiers from them later for use in building out the server. I always recommend building out your VPC in its own stack and never mixing it with others. It is a vital piece of the infrastructure that should be touched as little as possible.

Items Needed for this Demo:

  • IAM Instance Role
  • VPC Security Group ID
  • Subnet ID
  • Key to use for Instance Creation
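
If you already have these resources in place but do not have the identifiers handy, the AWS CLI is one way to look them up. This assumes the CLI is installed and configured with credentials for the account; the queries below are just examples and can be adjusted to taste:

aws ec2 describe-subnets --query 'Subnets[].[SubnetId,VpcId,CidrBlock]' --output table
aws ec2 describe-security-groups --query 'SecurityGroups[].[GroupId,GroupName]' --output table
aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName' --output table
aws iam list-instance-profiles --query 'InstanceProfiles[].InstanceProfileName' --output table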

For this example you are going to need just two files: variables.tf and main.tf. Technically, you can name them anything you want, as long as they end in .tf. I actually think it is more confusing to go with main.tf and variables.tf, so I am going to change the names of the files. I use Atom for editing my files, and descriptive filenames are nice.

mkdir stand_alone_windows_2012
cd stand_alone_windows_2012
touch win2012_test_instance.tf
touch win2012_test_variables.tf

Now let us add the file with the variables that we are going to need. (I will admit that there are some values hard coded into the second file, but that is because I have been testing this to get it working.)

variable "admin_password" {
  description = "Windows Administrator password to login as."
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default = "us-west-2"
}

# Windows Server 2012 R2 Base
variable "aws_amis" {
  default = {
    "us-east-1" = "ami-3f0c4628"
    "us-west-2" = "ami-b871aad8"
  }
}

variable "key_name" {
  description = "Name of the SSH keypair to use in AWS."
  default = {
    "us-east-1" = "AWS Keypair"
    "us-west-2" = "AWS Keypair"
  }
}

variable "aws_instance_type" {
  default = "m4.large"
}

variable "aws_subnet_id" {
  default = {
    "us-east-1" = "subnet-xxxxxxxx"
    "us-west-2" = "subnet-xxxxxxxx"
  }
}

variable "aws_security_group" {
  default = {
    "us-east-1" = "sg-xxxxxxxx"
    "us-west-2" = "sg-xxxxxxxx"
  }
}

variable "node_name" {
  default = "not_used"
}

You will need to go through this file, update the variables as needed, and create any resources that you do not happen to have. This would include subnets and VPC-based security groups.
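
Note that admin_password has no default, so Terraform will prompt you for it (or you can pass it with -var) on every plan and apply. One way to avoid retyping it while testing is a terraform.tfvars file next to the other two files. This is just a sketch, the value is a placeholder, and a file like this should never be checked into source control:

# terraform.tfvars -- keep this file out of version control
admin_password = "REPLACE_WITH_A_REAL_PASSWORD"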

The next file is win2012_test_instance.tf. This is what does all the heavy lifting. In my example, I am also installing Chef, but that is because I plan on automating my entire infrastructure, not just server creation.

# Specify the provider and access details
provider "aws" {
  region = "${var.aws_region}"
}

data "template_file" "init" {
    /*template = "${file("user_data")}"*/
    template = <<EOF
<script>
  winrm quickconfig -q & winrm set winrm/config/winrs @{MaxMemoryPerShellMB="300"} & winrm set winrm/config @{MaxTimeoutms="1800000"} & winrm set winrm/config/service @{AllowUnencrypted="true"} & winrm set winrm/config/service/auth @{Basic="true"}
</script>
<powershell>
  netsh advfirewall firewall add rule name="WinRM in" protocol=TCP dir=in profile=any localport=5985 remoteip=any localip=any action=allow
  $admin = [ADSI]("WinNT://./administrator, user")
  $admin.SetPassword("${var.admin_password}")
  iwr -useb https://omnitruck.chef.io/install.ps1 | iex; install -project chefdk -channel stable -version 0.16.28
</powershell>
EOF

    vars {
      admin_password = "${var.admin_password}"
    }
}

resource "aws_instance" "win2012_instance" {
  connection {
    type = "winrm"
    user = "Administrator"
    password = "${var.admin_password}"
  }
  instance_type = "${var.aws_instance_type}"
  ami = "${lookup(var.aws_amis, var.aws_region)}"
  tags {
    Name = "MY_DYNAMIC_STATIC_NAME"
    Env = "TEST"
  }
  key_name = "${lookup(var.key_name, var.aws_region)}"
  # Hard coded for now; this role name really should be a variable.
  iam_instance_profile = "STATIC_ROLE_NAME_SHOULD_BE_A_VARIABLE"
  # Dedicated tenancy places the instance on single-tenant hardware.
  tenancy = "dedicated"
  subnet_id = "${lookup(var.aws_subnet_id, var.aws_region)}"
  vpc_security_group_ids = ["${lookup(var.aws_security_group, var.aws_region)}"]
  /*user_data = "${file("user_data")}"*/
  user_data = "${data.template_file.init.rendered}"
}
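
With both files in place, and the placeholder values swapped out for real ones, a plan and apply from this directory should stand up the instance:

terraform plan
terraform apply

If you skipped the terraform.tfvars file, Terraform will prompt for admin_password, or you can pass it inline with -var 'admin_password=...'.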

Terraform vs CloudFormation — First Thoughts

Building out AWS infrastructure by hand is not something that should be taken on lightly. Just as in a data center or any other cloud environment (Google, VMware, etc.), building out a bunch of systems by hand is a good way to inherit a pile of technical debt and a headache or three. On top of that, building servers manually is slow and cumbersome, and since it is done by hand, it is easy to make mistakes and introduce snowflakes into your environment. Instead, we need to address the build out with automation tooling.

There are a number of options for building out an infrastructure in AWS; Terraform and CloudFormation are not the only ones. However, they are two options that have decent support, are mature, and are used by more than one person. Another option is to roll your own solution, going straight against the AWS API in any number of languages. That, however, is potentially more trouble than it is worth. Maybe later we can discuss ways to build out and probe an environment with your own tools, but for now let us stick to Terraform and CloudFormation.

What are CloudFormation and Terraform? 

CloudFormation is a tool written by Amazon Web Services as a way to create and control a collection of resources within AWS. It is under continuous development and improvement by the AWS team, for use on AWS.

Terraform is a product produced by HashiCorp. It is designed to treat your infrastructure as code, to work with multiple cloud providers, and to be versioned under source control like any of your code projects.

CloudFormation

I have used CloudFormation to build out entire environments, single servers, and for testing. It is a very useful tool for what it does, with a few limitations.


CloudFormation is AWS-specific. Very shortly after a new service comes out, it is supported in CloudFormation. This means that almost any service you are going to want to use will be available through CloudFormation.

There is a plethora of examples on how to use CloudFormation. AWS is great about providing examples, and it does not fall short when it comes to CloudFormation. There are examples for using various services, and even for integrating with Chef and Puppet.

A downside of CloudFormation is that if you don’t manage all of your resources through it, you can end up with a stack stuck in a hung state. This can happen, for example, if you manually delete a resource that CloudFormation created.

Terraform

Terraform has been designed to be cloud agnostic, with support for different providers. This means that if you are in a mixed environment, you can use the same tool to build out your infrastructure on AWS, GCP, or Azure. This is definitely a plus if you are not dedicated to AWS, but it can be a disadvantage if you want to use Terraform to manage a brand new service.

Because Terraform is not designed specifically for AWS, you may end up in a situation where you have to write your own plugin to manage a service or resource inside of AWS. It is great that it is open source and anyone can add features, but it is a bit of a pain that you may be stuck waiting for new AWS features to be supported.

Support is mixed. There are not as many examples of how to use Terraform as there are for CloudFormation, and you may end up relying on a search engine to find examples of the features you want to use.

But it has been designed to treat your infrastructure as code, and so it does offer ways to track your environment and its inputs.

Conclusion

At this point it is too early to say. I am going to have to think about it more and do a deeper dive into both Terraform and CloudFormation.