How to Become a DevOps Engineer


Here we will discuss what a DevOps engineer is and what the tasks and responsibilities of a DevOps engineer are. First, you need to understand that there are two main parts to creating an application. The development part is where software developers program and test the application. The operations part is where the application is deployed and maintained on a server. DevOps is the link between the two. Let’s dive into the details to understand the DevOps tasks and the tools needed to carry them out.

It all starts with the application. The development team will program an application with some technology stack: programming languages, build tools, and so on. They will, of course, use a code repository to work on the code as a team; one of the most popular today is Git. As a DevOps engineer, you will not be programming the application, but you need to understand how developers work and which Git workflow they are using. You also need a solid understanding of how the application is configured to connect to other services or databases, as well as concepts such as automated testing.
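As a minimal sketch, here is what a typical feature-branch Git workflow looks like on the command line. The branch name, commit messages, and file names are examples; the script works in a throwaway repository so it is fully self-contained.

```shell
set -e
# Work in a throwaway repository so the example is fully self-contained
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add . && git commit -q -m "Initial commit"

# Feature-branch workflow: branch off, commit the change, merge it back
git checkout -q -b feature/login-form
echo "login form" > login.txt
git add . && git commit -q -m "Add login form"
git checkout -q -              # switch back to the main branch
git merge --no-ff -m "Merge feature/login-form" feature/login-form
git log --oneline              # history now shows the feature merge
```

In a real team, the `git merge` step would usually happen through a pull or merge request on a platform like GitHub or GitLab rather than locally.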

Now that application needs to be deployed on a server so that users can eventually access it; that is, after all, why we are developing it. So we need some kind of infrastructure: on-premise servers or cloud servers. These servers need to be created and configured to run our application, and as a DevOps engineer you are responsible for preparing that infrastructure.

Since most servers that run applications are Linux servers, you need Linux knowledge, and you need to be comfortable with the command-line interface, because you will be doing most of your work on a server through it. Knowing basic Linux commands and understanding the Linux file system is a must, as are the basics of administering a server, such as how to SSH into it. You also need the basics of networking and security: for example, configuring firewalls to secure the application and opening ports to make it accessible from the outside, as well as understanding how IP addresses, ports, and DNS work.
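As a small illustration, here are a few everyday Linux commands of the kind described above. The file paths are examples; the remote-administration commands at the end are shown as comments because they assume a real server and firewall (the hostname is hypothetical).

```shell
set -e
# Everyday file-system and permission commands
mkdir -p /tmp/demo-app/logs                  # create a directory tree
echo "started" > /tmp/demo-app/logs/app.log  # write to a file
ls -l /tmp/demo-app/logs                     # list files with permissions and owners
chmod 600 /tmp/demo-app/logs/app.log         # restrict the log file to its owner
grep "started" /tmp/demo-app/logs/app.log    # search inside a file

# Server administration commands you would run on a real host (not run here):
# ssh deploy@server.example.com    # log in to a remote server
# ss -tlnp                         # list which ports are listening
# sudo ufw allow 443/tcp           # open the HTTPS port on an Ubuntu firewall
```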

However, we can draw a line here between IT operations and DevOps. You don’t need advanced operating system, networking, or security skills in DevOps, and you don’t have to administer servers from start to finish. There are professions, such as network and system administrators, that specialize in those areas. Your job is to understand these concepts only to the extent that you can prepare a server to run your application, not to completely take over managing the servers and the whole infrastructure.

Nowadays, as containers have become the new standard, you will probably be running your application as containers on a server. This means you need to understand the concepts of virtualization and containers and be able to manage containerized applications on a server. One of the most popular container technologies today is Docker, so you definitely need to learn it. So now we have developers creating new features and bug fixes on one side, and on the other side infrastructure, or servers, managed and configured to run the application. The question is how to get those features and bug fixes from the development team onto the servers to make them available to end users.
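To make this concrete, here is a minimal example Dockerfile for a hypothetical Node.js application. The base image tag, port, and file names are assumptions for illustration, not a prescription.

```dockerfile
# Hypothetical Node.js application; image tag, port, and entry file are examples
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev     # install only production dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

You would build and run it with `docker build -t demo-app:1.0 .` followed by `docker run -p 3000:3000 demo-app:1.0`.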

In other words, how do we release new application versions? That’s where the main tasks and responsibilities of a DevOps engineer come in. With DevOps, the question is not just how we do this in any possible way, but how we do it continuously, efficiently, quickly, and in an automated way. When developers finish a feature or bug fix, we need to run the tests and package the application as an artifact, such as a JAR file or a zip, so that we can deploy it. That’s where build tools and package managers come in.

Examples are Maven and Gradle for Java applications and npm for JavaScript applications, so you need to understand how this process of testing and packaging applications works. As mentioned, containers are being adopted by more and more companies as the new standard, so you will probably be building Docker images of your application. As a next step, that image must be saved somewhere: an image repository. An artifact repository such as Nexus, or a registry such as Docker Hub, will be used here, so you need to understand how to create and manage artifact repositories as well.
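For a Java project, `mvn test` and `mvn package` produce the tested JAR artifact. On the JavaScript side, the equivalent steps live in `package.json`; here is a minimal sketch, where the project name and script commands are examples:

```json
{
  "name": "demo-app",
  "version": "1.0.0",
  "scripts": {
    "test": "jest",
    "build": "webpack --mode production"
  }
}
```

Running `npm test && npm run build` then executes those scripts to test the code and produce the deployable bundle.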

Of course, you don’t want to do any of this manually. Instead, you want one pipeline that runs all of these steps in sequence, so you need build automation; one of the most popular build automation tools is Jenkins. You need to connect this pipeline to the Git repository to get the code. This is part of the continuous integration process, where code changes from the code repository get continuously tested. After a new feature or bug fix is tested, built, and packaged, you want to deploy it to the server, which is part of the continuous deployment process, where code changes get deployed continuously to a deployment server.
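Such a pipeline might look like the following declarative Jenkinsfile sketch. The stage commands, registry URL, and deploy script are assumptions for illustration; a real pipeline would use your project’s actual build commands and credentials.

```groovy
// Declarative Jenkins pipeline sketch; commands and registry URL are examples
pipeline {
    agent any
    stages {
        stage('Test')  { steps { sh 'mvn test' } }
        stage('Build') { steps { sh 'mvn package -DskipTests' } }
        stage('Build Image') {
            steps { sh 'docker build -t registry.example.com/demo-app:${BUILD_NUMBER} .' }
        }
        stage('Push')   { steps { sh 'docker push registry.example.com/demo-app:${BUILD_NUMBER}' } }
        stage('Deploy') { steps { sh './deploy.sh' } }
    }
}
```

Each commit pushed to the Git repository triggers this sequence, which is exactly the continuous integration and deployment flow described above.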

There could be additional steps in this pipeline, like sending a notification to the team about the pipeline state or handling a failed deployment, but this flow represents the core of the CI/CD pipeline, which happens to be at the heart of DevOps tasks and responsibilities. So as a DevOps engineer, you should be able to configure a complete CI/CD pipeline for your application. That pipeline should be continuous; that’s why the unofficial logo of DevOps is an infinity loop, because improving the application never ends.

New features and bug fixes get added all the time. Now let’s go back to the infrastructure where our application is running. Nowadays many companies use virtual infrastructure in the cloud instead of creating and managing their own physical infrastructure. These are infrastructure-as-a-service platforms like AWS, Google Cloud, and Azure. One obvious reason is to save the cost of setting up your own infrastructure, but these platforms also manage a lot for you, making it much easier to run your infrastructure there.

For example, using a UI you can create servers, configure your network, and so on through the platform’s services. However, many of these services are platform-specific, so you need to learn them to manage infrastructure there. If your applications will run on AWS, you need to learn AWS and its services. AWS is pretty complex, but again, you don’t have to learn every service it offers; you just need to know the concepts well enough to deploy and run your specific application on AWS infrastructure.

Since we’re building Docker images, our application will run as containers, and containers need to be managed. For smaller applications, Docker Compose or Docker Swarm is enough, but if you have many more containers, as in the case of large microservices, you need a more powerful container orchestration tool to do the job; the most popular is Kubernetes. So you need to understand how Kubernetes works and be able to administer and manage a cluster as well as deploy applications in it.
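In Kubernetes, you describe the desired state of your application in YAML manifests. Here is a minimal Deployment sketch; the names, image reference, replica count, and port are examples:

```yaml
# Minimal Kubernetes Deployment sketch; names, image, and replicas are examples
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                    # run three copies of the container
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0
          ports:
            - containerPort: 3000
```

Applying it with `kubectl apply -f deployment.yaml` tells the cluster to keep three replicas of the container running, restarting them automatically if they fail.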

Now, when you have perhaps thousands of containers running in Kubernetes on hundreds of servers, how do you track the performance of your individual applications, whether everything runs successfully, and whether your infrastructure has a problem? More importantly, how do you know in real time whether your users are experiencing problems? One of your responsibilities as a DevOps engineer may be to set up monitoring for your running application, the underlying Kubernetes cluster, and the servers the cluster is running on. So you need to know a monitoring tool like Prometheus or Nagios.
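With Prometheus, for instance, you declare which endpoints to scrape metrics from in its configuration file. A minimal sketch, where the job names and targets are assumptions for illustration:

```yaml
# Prometheus scrape configuration sketch; job names and targets are examples
scrape_configs:
  - job_name: demo-app           # metrics exposed by the application itself
    metrics_path: /metrics
    static_configs:
      - targets: ["demo-app.default.svc:3000"]
  - job_name: node               # host-level metrics from node_exporter
    static_configs:
      - targets: ["node-exporter:9100"]
```

Prometheus then pulls metrics from those targets on a schedule, and you can alert and graph on them, typically with a dashboard tool such as Grafana.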

Now let’s say this is our production environment. In your project you will of course also need development and testing or staging environments to properly test your application before deploying it to production, so you need the same deployment environment multiple times. Creating and maintaining the infrastructure for even one environment takes a lot of time and is very error-prone, so we don’t want to do it manually three times. As noted before, we want to automate as much as possible. So how do we automate creating the infrastructure, configuring it to run the application, and deploying the application on it?

You can use a combination of two types of infrastructure-as-code tools: infrastructure provisioning tools such as Terraform, and configuration management tools such as Ansible, Chef, or Puppet. As a DevOps engineer, you should know one of each. You need this to make your own work more efficient and your environments more transparent, so you know exactly what state they are in and they are easy to replicate and easy to recover. Since you work closely with developers and system administrators, you may also automate tasks for them. You will most probably need to write scripts, or maybe small applications, to automate tasks such as backups, system monitoring, cron jobs, and network management.
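As a sketch of the provisioning side, here is a minimal Terraform configuration that creates a single AWS server. The region, AMI ID, and instance type are placeholder examples:

```hcl
# Terraform sketch provisioning one AWS server; region, AMI, and size are examples
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"
  tags = {
    Name = "demo-app-server"
  }
}
```

Because the environment is described as code, running `terraform apply` against the same configuration lets you recreate an identical environment for development, staging, and production.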

To do that, you need to know a scripting language. This could be an operating-system-specific one like Bash or PowerShell, or, even more in demand, a more powerful and flexible language like Python, Ruby, or Go, which are also operating-system independent. Again, you only need to learn one of these languages, and Python is without a doubt the most popular and in-demand one in today’s DevOps space. It is easy to learn, easy to read, and very flexible, and it has libraries for most databases and operating-system tasks, as well as for the major cloud platforms.
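As a small example of the kind of automation script mentioned above, here is a Python sketch that archives a directory of logs into a zip backup. The function and directory names are made up for the demo, which runs against a throwaway directory so it is self-contained.

```python
import shutil
import tempfile
from pathlib import Path


def backup_logs(source_dir: str, backup_dir: str) -> str:
    """Zip up a log directory into the backup directory and return the archive path."""
    src = Path(source_dir)
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    # shutil.make_archive writes <base_name>.zip and returns its full path
    return shutil.make_archive(str(dest / src.name), "zip", root_dir=src)


# Demo with a throwaway directory so the example is self-contained
work = Path(tempfile.mkdtemp())
logs = work / "logs"
logs.mkdir()
(logs / "app.log").write_text("started\n")

archive_path = backup_logs(str(logs), str(work / "backups"))
print(archive_path)
```

In practice, a script like this would be scheduled with a cron job and pointed at real log directories.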

With these automation tools and languages, you write all of this automation logic as code, for creating, managing, and configuring infrastructure; hence the name infrastructure as code. You then manage this code just like application code, using version control such as Git. At this point you may be wondering how many of these tools you need to learn. Do you need multiple tools in each category, and which ones should you learn, given that there are so many? You should learn one tool in each category, ideally the most popular and most widely used one, because once you understand the concepts well, building on that knowledge and picking up an alternative tool will be much easier.
