Easily enable SSO (Single Sign-On) to protect your AWS Accounts

Enabling SSO on your AWS accounts is strongly encouraged: it adds an extra layer of security, gives you access to multiple accounts from a single entry point (with different permission sets), and above all, it is free.

The following is a walk-through of how to enable Single Sign-On in the AWS console. These instructions are aimed at people with a single account or just a few accounts, and they are a good general starting point for smaller organizations.

Enable Organizations

  • Log into the AWS Console as the Root account
    • Note: This should be in the primary account that will hold the Organization if you have more than one account. These instructions only go through getting SSO enabled, not adding accounts to an Organization.
  • Go to Organizations (once again another service free of charge)
  • Click “Create organization” and then confirm by clicking “Create organization” again in the dialog. (If you would rather script this step, see the boto3 sketch after this list.)
    • Note, this can only be done in one account of the org, and it is best to do it in a dedicated account if you are going to be adding a large number of other accounts or people.
    • If this is your only account, it is not a big deal. Just create the Organization in your base account.
  • Verify your email address to finish the setup process.
    • This is the default behavior. I believe it is a security measure to ensure that not just anyone with admin access to the account can create an Organization.
    • The email will be sent to the root account email address.
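
If you would rather script the Organization creation than click through the console, boto3 can do it. This is only a sketch, not part of the console walk-through, and it assumes your credentials belong to the account that will become the management account; the email verification step still arrives at the root email as described above.

# Sketch only: create an Organization from the account that will manage it.
# Assumes boto3 is installed and your credentials point at that account.
import boto3

org = boto3.client('organizations')

# FeatureSet='ALL' enables all Organizations features, which SSO needs.
response = org.create_organization(FeatureSet='ALL')
print('Created organization', response['Organization']['Id'])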

Enable Single Sign On

AWS SSO Landing Page
  • There are two options here. Click the link to “Enable AWS Single Sign On” or search for SSO in the top level search bar.
  • Click “Enable AWS SSO”.
    • Assumptions here:
      • You are going to use AWS as the Identity Source.
      • You have already created the Organization
      • You are happy to be getting this done, and looking forward to being able to access all your accounts from a single location. (Without being charged extra for it). (This sounded much funnier when I was writing it the first time)
Default Landing Page for SSO After Account Creation

Configure Single Sign On

  • Go to Settings.
    • Identity Source — Leave this set to AWS SSO.
      • For me this makes sense. I don’t have a third party solution for this, and I don’t want to have to manage it.
    • User Portal — you can create a custom URL. This is nice so that you do not have to try to remember the default one that is generated for you.
    • Multi-factor Authentication — these are my preferred settings.
      • Prompt for MFA – Every time they sign in (always-on)
      • MFA types – for me, either works, but I like to use Authy, because then I can get into it from anywhere. And, if my device breaks, I still have access to my keys.
      • If a user does not yet have a registered MFA device
        • Require them to provide a one-time password sent by email to sign in
        • If you are going to protect your accounts, go all the way with it.
      • Who can manage MFA devices — Users can add their own.

Create Users and Groups

The Create Group Pop-up
  • Next step is to create Groups and Users
    • I like to add the Group first because then I can just add the Users to all the groups when I create them.
    • Users are the people that will have access to the accounts.
    • The groups are just collections of users. No permissions are attached to the group itself; those come from the permission sets you assign later.

Configuring Account Access

Setting up and configuring AWS Single Sign-On is almost complete. The last remaining steps are to create Permission Sets and then assign them, along with users or groups, to accounts. As a default, I like to create an Admin and a ReadOnly permission set. You can attach the users/groups to the accounts first, but I prefer to create the permission sets first and then attach users/groups to the accounts.

  • Create Permission Set (if you want to script this step, a boto3 sketch follows this list)
    • Standard: predefined permission sets that you select from a list. Their session duration is limited to one hour.
    • Custom: the policies can be nearly the same, but one nice thing is that you can raise the session duration to up to 12 hours.
Dialog for adding permission set
  • Assign users and groups from the AWS Organization Tab.
    • I only have the one account. Click the check box and then click “Assign Users” (it should really read “assign users and groups”).
    • I like to assign groups rather than individual users, so I have added both to my mix here.
    • Note, assign one group at a time per permission set; if you select multiple groups and multiple permission sets in one assignment, every selected group gets every selected permission set.
  • On the AWS accounts page, click on the name of the account to get an overview of it.
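
If you would rather script the Custom permission set, the SSO Admin API in boto3 can do it. This is only a sketch under my own assumptions: the instance ARN is looked up at runtime, and the managed policy ARN is just an example.

# Sketch: create a ReadOnly permission set with a 12-hour session and attach a managed policy.
import boto3

sso = boto3.client('sso-admin')

# Grab the ARN of the SSO instance (normally there is just one).
instance_arn = sso.list_instances()['Instances'][0]['InstanceArn']

perm_set = sso.create_permission_set(
    Name='ReadOnly',
    Description='Read-only access for auditing',
    InstanceArn=instance_arn,
    SessionDuration='PT12H'  # ISO 8601 duration, up to 12 hours
)['PermissionSet']

# Attach an AWS managed policy (example policy ARN).
sso.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=perm_set['PermissionSetArn'],
    ManagedPolicyArn='arn:aws:iam::aws:policy/ReadOnlyAccess'
)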

You can now log in via SSO

All you have to do now is go to the link that is provided in the configuration. Once you log in, you will get a list of the accounts that you have available to you and the roles that you can assume in each one.

That is really all there is to it. It is quick and easy to set up, and after you start using it, you will wonder why you did not do it sooner. This is a great solution for solo developers, small companies, and even larger companies if you want to get into integration with your own federation servers.

An Introduction to AWS Step Functions – Part 2

In Part 1, I went through a whirlwind tour and left example code showing how to create a Lambda function and a Step Function. The thing is, I did not explain how the code works. That is what this article is about. I am going to use the same code from the previous post, but this time I will go through it and explain it. We shall see how well the formatting works.

Jumping right into it. This is the entire code for the Step Function from the previous post. As of right now, this definition has to be JSON; I did not find any information about YAML support. The documentation provided by AWS is pretty thorough, but it could be a bit clearer. My explanation could be better or worse.

{
  "Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt": "LambdaFunction",
  "States": {
    "LambdaFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:aws-serverless-repository-hello-w-helloworldpython-1JQ8TEEDUAHCE",
      "ResultPath": "$.taskresult",
      "Retry": [
        {
          "ErrorEquals": ["CustomError"],
          "IntervalSeconds": 1,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        },
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 30,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        },
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 5,
          "MaxAttempts": 5,
          "BackoffRate": 2.0
        }
      ],
      "Next": "ChoiceState"
    },
    "ChoiceState": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.taskresult.value1",
          "StringEquals": "value1",
          "Next": "SuccessState"
        },
        {
          "Variable": "$.taskresult.count",
          "NumericLessThan": 5,
          "Next": "LambdaFunction"
        }
      ],
      "Default": "FailState"
    },
    "SuccessState": {
      "Type": "Succeed"
    },
    "FailState": {
      "Type": "Fail",
      "Cause": "Invalid response.",
      "Error": "ErrorA"
    }
  }
}

Looking at this for the first time, it all seems a bit overwhelming. However, after breaking it down, you will see that while you have to be precise with it, Step Functions are fairly easy to read. (Writing them is still a PITA, but that is just what it is.) When you get down to it, the Step Function definition only contains 3 top-level sections: ‘Comment’, ‘StartAt’, and ‘States’. That is it.

Let us take a quick look at the first two mentioned above, Comment and StartAt. The former is just a comment so you know what the purpose of the Step Function is, though you could use it for whatever note you like. Then there is StartAt. StartAt links to a State, and the States section is what contains all the real meat of the Step Function. The value used for StartAt is the name of one of the States.

States

States are where everything really happens when it comes to Step Functions. They are the individual units that, when strung together, make the magic happen. When you look at it from this perspective, it sounds ridiculously easy. But, just like everything else, the devil is in the details. The key to understanding States is to know the various components that make up a State, and the types of States that are available.

Since this is an introduction, and based on my code here, I am not going to go through all the options. However, there are docs from Amazon on it. The thing about it is, it is not simple. Based on the docs, the only required field is Type. Great, but what are the various types? How do I link them together to make a cohesive whole? What are these other options, and how do I pass variables from one section to the next and use them over and over again in a loop? This information is available, but it is scattered all over the documentation and you have to piece it all together to figure out what is available.

What are the State types?

  • Pass – used to take input and pass it to output. Basically used to debug Step Functions, so you know what the heck is happening between States. (A small sketch using Pass and Wait follows this list.)
  • Task – this is where the work gets done. It can be a Lambda function or another supported service. It takes your input and provides output for the next State to consume.
  • Choice – handles branching in your Step Function by looking at the output from the previous State and sending the workflow on to whichever State is chosen next.
  • Wait – allows you to put a pause in your Step Function to wait for an action to finish, like deploying CloudFormation or some other long running task.
  • Succeed – the state machine has finished with success. This one is actually simple and does just what it says.
  • Fail – the end state where all has not gone according to plan. This is the opposite of Succeed, and allows for some reporting on what went wrong.
  • Parallel – run States at the same time. I have not yet worked with this, so I do not have as much information as I would like on this type of State
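
Two of the types I did not use, Pass and Wait, are easy to picture. Here is a minimal sketch (not part of my example) of a definition that uses both, built as a Python dict and dumped to the JSON that Step Functions expects:

# Sketch of a tiny state machine using Pass and Wait, serialized to ASL JSON.
import json

definition = {
    "Comment": "Minimal Pass/Wait example",
    "StartAt": "ShowInput",
    "States": {
        "ShowInput": {
            "Type": "Pass",        # copies input to output, handy for debugging
            "Result": {"note": "made it here"},
            "ResultPath": "$.debug",
            "Next": "HoldOn"
        },
        "HoldOn": {
            "Type": "Wait",        # pauses the execution
            "Seconds": 30,
            "Next": "Done"
        },
        "Done": {
            "Type": "Succeed"
        }
    }
}

print(json.dumps(definition, indent=2))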

In the example above, I use only 4 of the State types from above: Task, Choice, Succeed, and Fail. Mainly because I wanted to touch on a good bit of the basics without going too far down the rabbit hole. For my example, my Task was a Lambda function. To me this is the most logical way to use Step Functions, but there are probably many more that I have not considered. I also wanted to employ a loop, because I wanted to test flow control and passing variables to and from Lambda functions. I could have strung the Lambda functions together in a straight line, but I wanted to test out branching as well.

The big factor to remember is that the order of the States does not matter. What matters is linking which State comes next. You could use a single Choice State to manage the flow through the entire Step Function, and based upon the input move onto any number of other States or back to a previous one.

[code]
],
"Next": "ChoiceState"
},
[/code]

This little block of code is a great example. The Next keyword is used, and the value that follows is the name of the next State block that will be executed. The name of each State is the key that opens its block in the JSON template. In the example above, Next is not pointing at the Choice type itself; it points at a State that I happened to name ChoiceState (which does turn out to be of type Choice). In retrospect, much of this could have been named more cleanly, but it was an example that I put together rather quickly. This is the declaration:

[code]
},
"ChoiceState": {
"Type": "Choice",
"Choices": [
[/code]

I am going to end here for today. The next section is going to be about passing data from the Step Function to Lambda, and how to reuse it. But, I want to work on a smaller block of code and focus on just the one thing.

Getting Started with AWS Step Functions Part I

AWS came out with Step Functions a few years ago, and up until recently, I have not had the opportunity to dive in and give them a try. Yes, I could build my own pipeline or state machine, but the idea behind Step Functions is that it does most of the heavy lifting for you. That, and it ties into other AWS services. As such, I decided to dive into getting started, and looked at the demo options and walkthroughs that were available. None of them met my needs, so I rolled my own.

The idea is to see how I can create a Step Function that will run multiple loops, and call a Lambda function multiple times. What I wanted to test was the following:

  • Pass Variables into the Step Function and see how they are handled
  • Call a Lambda function multiple times
  • Create a loop using the Step Function DSL
  • Test output from Lambda and make a decision based upon it
  • Figure out any gotchas and how to trigger Step Functions

Let’s dive in. Now, this is the final result. It took me a few iterations to actually get to this point. Smarter people than I might be able to get it done in one go, but not I.

Lambda Code

I came up with a simple Lambda function written in Python 3.6. All I wanted to do was drive a loop with Step Functions and then output the values. Simple. And as you can see, this code is pretty simple. It could be streamlined, but it was quick and easy to write. (A sketch of running it locally follows the listing.)

def lambda_handler(event, context):
    # Log the incoming values so they show up in CloudWatch.
    print('value1 = ' + event['key1'])
    print('value2 = ' + event['key2'])
    print('value3 = ' + event['key3'])

    # On the first pass the Step Function has not added 'taskresult' yet.
    taskresult = event.get('taskresult', None)
    if taskresult is None:
        count = 0
    else:
        count = taskresult.get('count', None)

    if count is None:
        count = 0
    else:
        count = count + 1

    if count < 5:
        # Keep looping: return a value1 that will not match the Choice state.
        output = {
            'count': count,
            'value1': 'ThereIsNoSpoon'
        }
    else:
        # After enough passes, return the values the Choice state looks for.
        output = {'value1': event['key1'],
                  'value2': event['key2'],
                  'count': count}
    return output
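
One nice thing about a handler this simple is that you can exercise the loop logic locally before wiring it into a Step Function. A quick sketch, assuming the function above is in the same file or session:

# Sketch: drive the handler locally and watch the count climb to 5.
event = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
result = lambda_handler(event, None)
while result.get('count', 0) < 5:
    event['taskresult'] = result          # mimic what ResultPath "$.taskresult" does
    result = lambda_handler(event, None)
print(result)   # {'value1': 'value1', 'value2': 'value2', 'count': 5}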

Now we need to move on to the body of what we are working on, and that would be the Step Function. Step Functions have their own domain-specific language (DSL) that is used to define the state machine. I wanted more than just a “Hello World” example. The idea was to loop through a Step Function, make sure that I could call the Lambda function multiple times, and then end in either a success or a failed state.

AWS Step Function Code

{
  "Comment": "A Retry example of the Amazon States Language using an AWS Lambda Function",
  "StartAt": "LambdaFunction",
  "States": {
    "LambdaFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:aws-serverless-repository-hello-w-helloworldpython-1JQ8TEEDUAHCE",
      "ResultPath": "$.taskresult",
      "Retry": [
        {
          "ErrorEquals": ["CustomError"],
          "IntervalSeconds": 1,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        },
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 30,
          "MaxAttempts": 2,
          "BackoffRate": 2.0
        },
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 5,
          "MaxAttempts": 5,
          "BackoffRate": 2.0
        }
      ],
      "Next": "ChoiceState"
    },
    "ChoiceState": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.taskresult.value1",
          "StringEquals": "value1",
          "Next": "SuccessState"
        },
        {
          "Variable": "$.taskresult.count",
          "NumericLessThan": 5,
          "Next": "LambdaFunction"
        }
      ],
      "Default": "FailState"
    },
    "SuccessState": {
      "Type": "Succeed"
    },
    "FailState": {
      "Type": "Fail",
      "Cause": "Invalid response.",
      "Error": "ErrorA"
    }
  }
}

The way that this code works is as follows. Everything revolves around States, so you have to move from State to State. This is a key concept when it comes to Step Functions. Now, there are multiple State types, but I am not going to go into that here. The key factor is that the workflow will keep looping until the expected return value comes back. Looking at it now, it looks like a bunch of gobbledygook. I am going to have to come back and write up how this works later.

This is what the visual representation looks like when viewed in the AWS Step Functions page. There is a defined ‘Start’ and ‘Stop’, and the other stages match the State names from the previous section. The diagram gives you a model of the workflow that you can follow.

The cool thing about AWS Step Functions is that they guarantee a run, which matters in situations where you need to ensure that the code is executed and need that guarantee. That guarantee is also most of what you are paying for: running a Lambda function triggered from SQS would be cheaper, but not as easy to make reliable.

Back to our stuff now: how do we execute the AWS Step Function? Right now, I am not going to go into the logic around passing variables around. Needless to say, you will need to understand that when writing your own, and I am going to have to revisit it.

Execution. A couple of items to note.

  • Each execution has to have a unique name.
    • Note, this will bite you when you are testing, and think about it when executing via automation (a boto3 sketch of that follows the example input below).
  • It takes its input as JSON, just like Lambda.
  • Making a small change in the inputs can cause madness
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}
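
If you are triggering this from automation rather than the console, a sketch of what that looks like with boto3 is below. The state machine ARN is a placeholder, and the uuid suffix takes care of the unique-name requirement:

# Sketch: start an execution with a unique name and the same JSON input as above.
import json
import uuid
import boto3

sfn = boto3.client('stepfunctions')

response = sfn.start_execution(
    stateMachineArn='arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine',
    name='test-run-' + uuid.uuid4().hex,   # execution names must be unique
    input=json.dumps({'key1': 'value1', 'key2': 'value2', 'key3': 'value3'})
)
print(response['executionArn'])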

The output will also be in JSON, and you can see the results in the visual display.

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3",
  "taskresult": {
    "value1": "value1",
    "value2": "value2",
    "count": 5
  }
}

This is what the output looks like.
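
If you are pulling the result from automation instead of the visual display, the execution API returns the same JSON. A rough sketch, where the execution ARN is a placeholder that would normally come from the start_execution call sketched earlier:

# Sketch: fetch the execution result programmatically.
import json
import boto3

sfn = boto3.client('stepfunctions')

execution_arn = 'arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:test-run'
details = sfn.describe_execution(executionArn=execution_arn)
if details['status'] == 'SUCCEEDED':
    # 'output' is a JSON string; pretty-print it like the console does.
    print(json.dumps(json.loads(details['output']), indent=2))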

I will go into more detail on the breakdown of the Step Function in the next post. There is a lot to be covered, and this just scratches at the surface.

Writing Tech Blogs are Hard Work

I once read an article that said more people need to write technical blogs. That the problem with much of the technology field is that people do not write in-depth articles on the stuff they are doing, and if more people were able to take the time and post a blog entry here and there, we would all be better for it. As nice as that sounds, I have to say that writing technical posts is difficult, time consuming, and the posts can quickly go out of date.

First off, writing in general is not easy, and that is before you get to adding the technical part on top of it. There are a number of brilliant developers, system engineers, and devops people that can create some of the most complex and unique solutions to problems, and yet cannot begin to write the first bit of documentation or narrative to describe what it is that they have done. It is not that they are dumb; they just have not honed the writing skill, or might not be proficient at it. Writing is like coding: if you don’t use it, you can lose it. It is also a skill that can be developed and honed over time. My own writing skills are rusty after taking a long time off from it, and I am trying to get back into the flow.

Some people recommend that, to become talented at writing, you need to dedicate an hour a day to it, or 5 hours a week. And that is just the writing part. It does not include working on the technical elements that are needed to provide content for the audience you are trying to reach. Maybe a few years back, when I did not have family obligations, this would have been possible, but now I have to sneak in time here and there. For the casual tech blogger, it is not going to happen. Unless you are doing a lot of writing for work, there is little time to develop your writing skills. This leads many people to turn away from writing a technical article even though they might have some of the best ideas, if only they could get them into a usable format.

Now, the next big reason it is so hard to write technical articles is that gathering the technical information together is not easy. Don’t get me wrong, that is not to say that there are not plenty of topics you can write on. Far from it; there are probably thousands of areas and topics that could be written about. But, just like the writing, it takes time to gather that information. There are a few high-profile bloggers who are able to dedicate their jobs to writing technical blogs. Many of them are evangelists or full-time employees whose job it is to talk about certain technical areas. That is great for them, and to be honest, I am a bit envious. For the average devops engineer or developer, it is not so easy.

There are some companies that will allow you to blog about the work you are doing, but for most people that is not the case. So, on top of your full-time job, you must go and use your own resources and your own time to gather the data, the code, and whatever else is needed just to begin writing the article. Once the work has been done to vet the project or topic you are looking at writing about, you have to circle back around and figure out how you want to pull the information together into a story for others to read. This goes back to item number one, and I have already mentioned how difficult even that first step is. Now we are taking it to a higher level: you do not just have to write a story, you must structure it around the technical information.

So, after getting the data together and knowing what you are going to write about, you have to structure it. There are screenshots to be made, code snippets to be shared, and links to technical information to be included. As easy as all of that sounds, it is much harder in practice. Figuring out how to crop an image, or keep it from being 4 megs in size, or how to get the images aligned so that it all looks correct. Then there is the question of where you put the code you are going to share. How do you get line numbers showing on the code? How do you enable syntax highlighting on the code samples? None of that is easy, and all of it can be overwhelming when you just want to write an article to share some information with other techies.

And the last fact of the matter is that you never know if all your efforts are for naught. You could write a great article, but if people don’t find out about it, then what do you do? Do you keep plugging away and hope people will stumble upon the articles you have written? Nobody wants to do a bunch of work for nothing. So, in recap, writing technical articles is difficult. It can be hard, and it is not always rewarding. I thank all the people that stick with it, but I understand all of those who don’t.

Configuring AWSCLI and Python on Windows

Trying to do everything on a Windows 10 machine has become a rather interesting experiment. It started as a place to play video games and have access to a few programs that are not easily available on Linux, and has turned into a test of whether I can now do all the things on Windows 10 that I could do on Linux.

It turns out I wanted to test out the Cloud Directory service from AWS. I figured the 2 easiest ways to do this would be via the AWS CLI and Python. It did not occur to me that I had neither of these installed until I opened up Cmder.exe and typed [code light="true"]aws[/code] and then [code gutter="false"]python[/code], and both came back with “not found”.

Wait, what? Where are my programs!

So, now I need to install and configure both of these. The test is to see how easy or difficult it is to get this set up on this Windows machine. Here is a quick list of the normal steps that I take to install the AWS CLI on most any Linux machine:

  1. Install Python
  2. Configure a virtual environment to hold my cli tools
  3. Use pip to install aws cli
  4. Configure aws cli
  5. Test that it all works

The first step in setting all of this up is to get Python installed on your machine. The AWS CLI is built on Python, and as such you need to have Python installed in order to use it. Now, some people will install the AWS CLI at the system level and use the globally installed Python. Having worked with multiple versions of Python at the same time, and on projects that use different libraries, I almost always set up a Virtual Environment to run my Python programs and other sundry tools from. This way, I do not cross-contaminate my streams, and I have a clearly defined idea of which versions I am using on different projects.

Installing Python

This is a relatively straightforward task. Click on the Python installer that suits your needs, download it, and follow the install prompts. I chose Python 3.6.7 because it is the version I am already using for some Lambda programs in AWS, and because there are some changes in 3.7 that have broken a few libraries. One big one is ‘async’ and ‘await’ becoming keywords. Follow the prompts to install Python and then restart your favorite command line tool. I run bash via git, and use cmder.exe as my shell program.
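
To make that concrete: code that used async as a plain name still parses under 3.6 but fails outright under 3.7, where async and await are reserved words.

# Fine under Python 3.6 (deprecated); a SyntaxError under 3.7+, where async/await are keywords.
async = "just a variable name in 3.6"
print(async)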

Once you have it installed, you should be able to run the following to verify that Python is available in your workspace: python --version. This should output ‘Python 3.6.7’ or whichever version of Python you installed.

Setting up Virtual Environment and AWS CLI

The next part is to set up the virtual environment and then use it to install the AWS CLI. First we run Python and create the virtual environment. Then we activate the environment and install the AWS CLI. It is just a few simple commands, and you should then be up and running.

[code language="bash"]c:\ericv\dev\> python -m venv p367
c:\ericv\dev\> p367\Scripts\activate.bat
(p367) c:\ericv\dev\> pip install awscli
(p367) c:\ericv\dev\> aws help[/code]

And bamm! You are done. There is still configuring the AWS CLI with your credentials and default settings, but that is another issue. It took me longer to write this up than it took to do the install, which in and of itself is a good thing to know. Now, the question is whether I will run into any more problems. But, so far so good.
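
For the last step on my list, testing that it all works, a quick sanity check from inside the same virtual environment does the job. This is just a sketch: it assumes credentials are already configured (for example via aws configure), and boto3 may need its own pip install into the environment.

# Sketch: confirm the credentials and region are wired up correctly.
import boto3

identity = boto3.client('sts').get_caller_identity()
print('Account:', identity['Account'])
print('Caller ARN:', identity['Arn'])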