DevOps Consultant vs DevOps Engineer

What is the difference between a DevOps Consultant and a DevOps Engineer? This question recently came up and I wanted to add my thoughts on the topic.

First, let's address the fact that consultants are almost always hired to work on a specific project or to address a specific issue the company is facing. As such, the role of a consultant will probably always differ from that of a regular employee. The regular employee has to deal with the mundane job of keeping the lights on: email, ongoing projects, operations, regular development tasks. The list goes on.

In a lot of ways, consultants are free from this type of scenario and can focus on the handful of tasks at hand. As a result, consultants can sometimes get more done than a regular employee who has other responsibilities. Also, because they are seen as experts in their areas, the views of consultants are sometimes more highly valued, or taken to heart, by upper management.

With that out of the way, there is a difference. The article does stress the teaching part; I am not sure it stresses the transformation part as much. A DevOps consultant, if they are good, will pair up with people at the company and teach them how DevOps works, and how to successfully transform a company and a group. They are brought in to look at a situation with fresh eyes, and potentially help shift the entire way a group is doing things.

This does not mean there is really a difference in skill sets between a DevOps engineer and a consultant. But if there is one, I would say it is around teaching and the ability to focus. There is also the business acumen that consultants pick up because they tend to deal with people from all levels of the company.

On the other hand, consultants, unless they are on a long assignment, may find that while they are making high-level changes, they do not always get the chance to get their hands dirty. This depends on the company they are working with and what the job entails. So a DevOps engineer can end up spending more time developing solutions, whether that is configuring CI/CD, writing infrastructure as code, automating application deployments, or automating error responses. Because of this, consultants sometimes have to dive deep on side projects to ensure their skills stay current with what is going on. Tongue in cheek, but sometimes it is just the opposite, and they are scrambling to learn the tool the customer is using so they can help with that project.

The other reason people bring in consultants is a skills gap. Company A realizes it is not moving fast enough and needs to change. They go to the IT group, and that group says, “Not my job.” In that case, they bring in consultants because it is quicker to hire consultants than to hire full-time people. And, unless you plan to run your company entirely on consultants, you should be bringing on full-time people to learn from them or to fill those roles as the consultants leave.

A bit long-winded, but I hope it helps. Ping me if you have any other questions. I have been on both sides of the fence.

Oh, and travel. Consultants normally have to travel.

Time to Hack On an Adafruit Trinket

I don’t know much about microcontrollers. Somewhere along the line, I never learned about them, how they work, or how to wire them together and use them. Until a few years ago, I did not even know what a breadboard was. Heck, I still do not really know how they work, but I am going to learn. And to that end, I am starting with an Adafruit Trinket.

Let me start by saying that I have hooked up an Arduino microcontroller and turned a light off and on, but that was the extent of it. Without some other sensors, I did not see the point. As a result, it ended up collecting dust on a shelf for the past few years. I did have ideas for some projects I wanted to do, but I had no idea where to start or what tools I would need.

Those are all pretty much excuses. In the end, I never took the time to learn the various parts of electronics. Now, I plan on diving into the world of microcontrollers to see what I can create, and whether I have any desire to continue with it. It could be that I finish a single project and decide there is nothing else I would like to build or tinker with. That is one of the fun things about trying out new projects and activities: you find what you enjoy and what you do not.

As to how I am going to start, that is simple. I picked a project to do. Using some items I already own, I am going to start learning how to program an Adafruit Trinket. Along with this, I got a couple of the light rings that Adafruit also sells. One of my main issues with Arduino is the cost; I don’t want to dish out $40 for just the board for every project I plan on working on.

So, now it is time to go play and to learn. At least on this topic. It is good not to do the same thing all the time, and to keep things interesting. I will add updates as I continue to work on my project, and maybe include the final product.

Issues with Ubuntu’s Startup Disk Creator for non-Debian ISOs

Let me start with the scenario that led me into issues with the Ubuntu Startup Disk Creator. I had been running Ubuntu GNOME, a flavor of Ubuntu focused on a mostly vanilla install of GNOME on top of Ubuntu. Well, Ubuntu was finally getting rid of the Unity desktop, so the spin-off I had been using was no longer going to be updated. Fair enough.

I had Ubuntu installed, and I wanted to give Fedora a try (it had been a while), so I just needed to create a bootable USB stick. Normally I would use dd, but Ubuntu has a tool, and I thought, sure, let’s try this GUI tool. Quick and easy. That led me down the rabbit hole you see here.

Ubuntu has a page dedicated to the topic, “Create a bootable USB stick on Ubuntu,” which walks through using the Startup Disk Creator. It does mention using an Ubuntu ISO image, but what works for one ISO should work for most any ISO. At least, that is what I thought. It turns out that if you are not using a Debian-based distro, the application fails silently. It just sits there and does nothing.

At this point, I could have just used dd and been done with it, but I wanted to find out what was going on with the application and why it was not working. A quick note: I had not tried a Debian-based ISO in the application, because I wanted to try out the latest Fedora and had not bothered to pull down another ISO.

Finding the source code for the USB Creator took a bit longer than planned. The code is hosted on Launchpad, which uses Bazaar as its version control system. Having used multiple systems over the years, Launchpad felt like a step back in time. Unless I missed something, there is no easy search within a project, the navigation is antiquated, and the look and feel leaves a bit to be desired.

My first thought was to file a bug on the issue. Even if I was going to fix it myself, I wanted to ensure the issue was being tracked, and to see if anyone else had already submitted a bug on the topic. I was surprised when I went to the bug page and found that the package had not been configured for bug reports yet. At this point, I was more frustrated than anything else and had decided I was going to use dd to create the USB disk, but I still wanted to find out what the problem in the code was.

Digging through the code, I found that the core of the application is a few Python scripts. Nothing wrong there; I am a huge fan of Python for a number of reasons. So I dug in and, to my surprise, found the issue rather quickly.

From https://bazaar.launchpad.net/~usb-creator-hackers/usb-creator/trunk/view/head:/usbcreator/backends/base/backend.py

        if extension == '.iso':
            label = self._is_casper_cd(filename)
            if label:
                self.sources[filename] = {
                    'device' : filename,
                    'size' : os.path.getsize(filename),
                    'label' : label,
                    'type' : misc.SOURCE_ISO,
                }
                if misc.callable(self.source_added_cb):
                    self.source_added_cb(filename)
        elif extension == '.img':
            self.sources[filename] = {
                'device' : filename,
                'size' : os.path.getsize(filename),
                'label' : '',
                'type' : misc.SOURCE_IMG,
            }
            if misc.callable(self.source_added_cb):
                self.source_added_cb(filename)

The issue is in the `.iso` branch above. The application checks whether the ISO file `_is_casper_cd`, and that check returns `None` if the image does not follow the expected layout. When that happens, the `if label:` block is skipped entirely, so the source is never registered. A simple exception could have given the end user some idea of what the issue was, but instead it fails silently.

From https://bazaar.launchpad.net/~usb-creator-hackers/usb-creator/trunk/view/head:/usbcreator/backends/udisks/backend.py

    # Device manipulation functions.
    def _is_casper_cd(self, filename):
        for search in ['/.disk/info', '/.disk/mini-info']:
            cmd = ['isoinfo', '-J', '-i', filename, '-x', search]
            try:
                output = misc.popen(cmd, stderr=None)
                if output:
                    return output
            except misc.USBCreatorProcessException:
                # TODO evand 2009-07-26: Error dialog.
                logging.error('Could not extract .disk/info.')
        return None

The fact that this code fails silently, the project is not set up to accept bug reports, and there is no documentation on this behavior is problematic. A simple review should have caught the problem. Also, as you can see in the snippet above, they do log an error when the data cannot be extracted, but they do nothing when the expected files simply do not exist.

At a minimum they could have raised an error. Another option would have been to continue with the disk write but skip the label information. Either way, give the end user a clue.
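To make that concrete, here is one way the check could surface the problem instead of swallowing it. This is only a sketch, not the actual usb-creator code; the exception class and helper name are made up for illustration.

```python
# Sketch of a louder failure path for the _is_casper_cd() result.
# UnsupportedImageError and check_casper_label are hypothetical names,
# not part of usb-creator itself.

class UnsupportedImageError(Exception):
    """Raised when an ISO lacks the /.disk/info metadata Ubuntu expects."""

def check_casper_label(label, filename):
    # `label` is whatever _is_casper_cd() returned for `filename`
    if not label:
        raise UnsupportedImageError(
            f"{filename} does not look like a Debian/Ubuntu live image: "
            "no /.disk/info label found")
    return label
```

Even an unpolished traceback like this would have pointed me at the Casper check immediately, instead of leaving the application sitting there doing nothing.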

In the end I just used good old-fashioned `dd`.

user@host$ sudo dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx status=progress oflag=sync

Why go Serverless

The question is not why you would want to go serverless; the question is why you would not.

Let us think about this for just a second. In a typical environment, you not only have to think about the flow of the application and how its pieces fit together, but also about the underlying architecture. Now that DevOps is becoming more common, it is sometimes the developers who find themselves having to worry about these issues. Or, in a more traditional shop, the application is “thrown over the fence” to a team that has to deal with the underpinnings. In other companies, the DevOps group works with the dev team on designing and handling the infrastructure. No matter how you slice it, the infrastructure is a huge headache that has to be managed.

In a serverless world, you have to ensure that the application runtime you are programming for matches your development environment, but for almost all intents and purposes, that is where it ends. When dealing with servers, there are a number of factors you have to take into account. For a moment, consider all the items that have to be considered with a classic server architecture:

  • Physical or Virtual Hardware
  • Operating Systems
    • Patches
    • Package installation
  • Monitoring
    • server uptime
    • server performance
  • Scheduling such as cron.

The list goes on and on; this is just a cursory list to get you thinking about it. When dealing with servers, there is no magic button that takes care of all of your issues. You have to deploy the system, patch it, configure it for the application, install the application, add it to inventory, and add monitoring. All of this is time taken away from the application you are trying to develop. Oh, and I failed to mention all the security considerations that must also be taken into account. (And yes, you still have to think about security when dealing with serverless.)

The normal response to this is, “My application can never be serverless.” There are rare occasions where this is true, but in many situations it is simply not true anymore. With the various services from Amazon and Azure, you can make serverless work for a wide variety of applications. My area of specialty is AWS, and as such, most of my examples will come from there. If you wanted to build a serverless application in AWS, you would probably use the following services:

  • Lambda — the serverless code execution platform
  • CloudWatch — monitoring is good
  • API Gateway
  • S3 — for hosting front-end static content, often powered by a JavaScript framework
  • DynamoDB
  • CloudFront
  • Cognito

Using these tools in various ways allows you to create applications of all sorts. In fact, I believe that tying in other parts of the AWS family of tools would let you create serverless applications in ways I have never even thought of.

But how do you start using serverless? It is best to look at your current environment, find a task that is short-lived and runs on a machine, and move it. An even better use case is to find a repetitive task that needs to be automated and try doing it with Lambda. This lets you get your feet wet with something that is already being accomplished, without breaking existing support models. Finding something simple is the best way to get a win under your belt and gain the confidence to keep expanding your use of it.
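As a sketch of what such a starter function might look like, here is a minimal Lambda-style handler for a made-up repetitive task: flagging stale temp files. The event shape and the `STALE_DAYS` threshold are invented for illustration; a real function would act on the results (say, deleting objects with boto3) rather than just returning them.

```python
from datetime import datetime, timedelta, timezone

STALE_DAYS = 7  # illustrative threshold, not an AWS default

def handler(event, context):
    """Entry point Lambda invokes. `event` lists candidate files with
    ISO-8601 timestamps; return the keys older than the cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=STALE_DAYS)
    stale = [
        f["key"]
        for f in event.get("files", [])
        if datetime.fromisoformat(f["last_modified"]) < cutoff
    ]
    # Returning the list keeps the sketch testable locally; a real
    # function would delete or archive these files here.
    return {"stale": stale}
```

Because the handler is a plain function, you can invoke it locally with a sample event to check the logic before wiring it to a CloudWatch schedule.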

The big thing to remember is that all this serverless tooling can be used in all areas of the stack, from application development to application and server support. For example, automating the backups of long-lived servers could be done via Lambda; that is an infrastructure helper. Converting images into thumbnails and serving them to the application would be an application-specific task, and it could be one that supports a classic application running on servers.

Do not let the idea of serverless scare you off. Take a moment, see where you could make it fit, and give it a try. Worst case, you don’t like it and have lost a few cycles. Best case, you have found a new tool to add to your quiver.


DevOps is more than just about developers

There. It has been said. Take a moment, and think about it.

There is a push in the community to talk about Continuous Integration / Continuous Deployment (CI/CD) and to tie it all to the DevOps movement: about which practices get code to market the fastest, with a minimal number of defects. And I will admit, I agree wholeheartedly that CI/CD, or some sort of pipeline, is important to allow software development teams to move fast and innovate. Or to just roll out code without doing it every six months. I would almost argue that CI/CD is more akin to Agile and Scrum methodologies than it is to DevOps.

When it comes to DevOps, there is a side of the equation that is not commonly addressed: infrastructure and operations.

Infrastructure Ops

InfraOps, Infrastructure Operations, whatever you want to call it. This is the glue that holds together all the services that applications run on top of. It is what people take for granted when it works, but cry foul over when it goes down. When servers fail at 2 a.m., InfraOps is what gets the lights back on.

It is easy to dismiss this side of the DevOps house, so let’s quickly run through the items that need to be covered here.

Infrastructure Operations List

  • System Deployment
    • Operating systems
    • Patching
  • Virtualization (Docker, AWS, Azure)
  • CMDB
  • Monitoring
  • Software Deployment
    • 3rd Party
    • In house
  • Load Balancers
  • Hardware Failover
  • Disaster Recovery
  • Caching Servers
  • Database Systems
  • Development Environments
  • Backups

This is by no means an exhaustive list of the items that must be handled for a solution to be properly deployed and supported, but it should bring to light the side of the DevOps world that is often overlooked. It does not matter how good your code is if you do not have an infrastructure that will stay online and support your applications. This is frequently an understaffed area that needs more attention.

Automation is king

The only way to ensure that the infrastructure can support ongoing systems, deployments, and updates is with automation. Every time a machine is built by hand, a configuration file is manually changed, or updates are performed manually, you have introduced risk into your environment. Each of these interactions is a potential slowdown in the pipeline, or worse, a manual change that never gets documented.

There are a number of tools and processes for infrastructure automation: Chef, Ansible, CFEngine, Salt. The list goes on, and in some places people have rolled their own. The tool matters less than moving away from manual intervention toward a dynamic, scalable infrastructure. These tools all require in-depth knowledge of the underlying systems as well as the ability to code, but the end goal is DevOps on the infrastructure side as well as the development side of the house.
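These tools differ in syntax, but they share one core idea: describe the desired state and change the system only when reality differs, so running the same step twice is safe (idempotence). Here is a minimal sketch of that pattern for a single config line; the helper name and the example setting are made up for illustration and are not taken from any of the tools above.

```python
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Make sure `line` appears in the file at `path`.
    Returns True only when a change was actually made."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False  # already in the desired state: do nothing
    path.write_text("\n".join(existing + [line]) + "\n")
    return True
```

Run it twice with the same arguments and the second call reports no change, which is exactly the behavior configuration management builds on.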

There are places where the tools to support this automation are lacking. While new SaaS solutions are filling some of the gaps, if you need to run your monitoring system in house, many of the solutions that exist today were not built around dynamic infrastructure and ephemeral machines. The tools need to catch up with the needs of the users. In the meantime, the back-end DevOps folks write workarounds to handle situations like this.

Moving Forward

Moving forward, let us try to look at all aspects of DevOps, from development to application deployment, and from hardware to infrastructure and scaling design. There is still much work to be done, but we are moving in the right direction.