
How to Ramp Up Your DevOps Team

August 19, 2020 / by Rafael Alvarez

Studies show that nearly three-quarters of all businesses have some kind of DevOps process in place—and it’s not hard to figure out why. Simply put, DevOps can go a long way towards improving your time-to-market through more stable IT—which can in turn provide you with a competitive advantage in the marketplace.

Now, given the stat we quoted above, there’s a good chance that if you’re reading this you already have some kind of DevOps team in place. Pop quiz: how many of the benefits listed above have you actually experienced? We’re guessing that some of you are probably reaping all the rewards of this approach to Operations while others are perhaps getting a mix of benefits and challenges. You might find that you’re able to deploy software more quickly, but you’re actually not getting bugs resolved at a decent rate because it’s not clear who owns them. Conversely, you may have all of your roles and responsibilities sketched out nicely, but you haven’t actually implemented the right tools for the job—meaning that your time-to-market hasn’t measurably improved.

If you’re in the camp that hasn’t yet achieved a perfect DevOps deployment, we hope this piece will be a useful reference for getting things on track.

 

What Is DevOps?

Okay, okay, we know what you’re thinking—you already know what DevOps is. That’s probably true, but different people have different definitions for concepts, and it’s always helpful to start out on the same page. Simply put, DevOps is the evolution of IT operations to include many of the tools and techniques of modern web development, in order to support faster delivery in a stable way. This means that tasks that were once done manually—like building a piece of software and deploying it to your servers—are now done in an automated fashion through code and cloud-based applications.

Though there are any number of factors that contributed to the creation of this function, the name of the game here is speed. As development workflows got faster, and expectations got more and more lofty, maintaining stable IT through manual processes became untenable. Developers were increasingly deploying tactics like continuous integration and continuous delivery (CI/CD) to speed up turnaround times on updates and products, and deployment of those updates and products had to accelerate to keep up.


Tools vs. Culture

You’ll hear definitions that emphasize the collaborative processes between IT and development that DevOps teams strive to facilitate, or the continuous feedback, integration, and monitoring that’s meant to go into a successful DevOps cycle—and no doubt those things are important. But on a baseline level it’s helpful to understand this function specifically as an evolution of old-school IT management.

In other words, while DevOps certainly is about culture—it benefits from engagement and feedback beyond the IT department—it has to start with the right tools. Typically, this means rolling out:

  • Cloud servers for eventual deployment
  • Version control tools like Git
  • Tools to package, deploy, and run code (e.g. Docker, Jenkins, etc.)
  • Automated testing and monitoring tools

Crucially, these different elements need to be successfully networked into a cohesive pipeline that actually powers continuous integration and deployment of developer code. The result is that, where IT folks used to specialize in the physical elements of the deployment pipeline, they now have to master the coding and other technical skills that go into maintaining a CI/CD pipeline that developers can rely on. In the following sections, we’ll talk about this pipeline in a little more depth, in order to give you a sense of how to build a foundation for successful deployment.
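To make that concrete, here is a deliberately minimal sketch, in Python and purely for illustration, of how those pieces might chain together: version control pulls the latest code, Docker packages it, automated tests run against the packaged artifact, and the result is pushed toward your cloud environment. The repository URL and image names are placeholders, and a real pipeline would live in a CI server like Jenkins rather than a hand-rolled script.

    # pipeline_sketch.py -- illustrative only; real pipelines belong in a CI server.
    import subprocess

    def run(cmd):
        """Run a shell command and fail loudly if it fails."""
        print(f"$ {cmd}")
        subprocess.run(cmd, shell=True, check=True)

    def pipeline(repo_url, image_tag):
        run(f"git clone {repo_url} app")               # version control: fetch the latest code
        run(f"docker build -t {image_tag} app")        # packaging: build a container image
        run(f"docker run --rm {image_tag} pytest -q")  # automated tests (assumes pytest is installed in the image)
        run(f"docker push {image_tag}")                # hand the artifact off toward your cloud servers

    if __name__ == "__main__":
        # Hypothetical repository and registry, named here only for illustration.
        pipeline("https://example.com/acme/app.git", "registry.example.com/acme/app:latest")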

 

Continuous Integration

First things first: DevOps relies on successfully implementing automation for each stage in the pipeline. This begins with the integration stage—meaning that you need to adopt the right tools and processes to make sure that new code is constantly and automatically validated and absorbed into the right code repo as different developers push new changes. This is essential if your operations infrastructure is going to support an agile team, since the ability to merge code quickly and automatically is key to keeping slowdowns and roadblocks from creeping into the development lifecycle. This is separate from the delivery part of the equation, but it establishes the foundation on which you build out the rest of your pipeline.

At this stage, you might be configuring and integrating something like Jenkins (which automates a number of codebase management tasks and can act as a standalone CI server as needed) into your existing IT environment. This, of course, requires a fair bit of specialized knowledge, which means that for teams that aren’t experienced in these sorts of workflows there are a number of integration pitfalls to watch out for.
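As a rough mental model of what “constantly and automatically validated” means in practice, the sketch below shows the kind of per-commit check a CI server such as Jenkins would run on every push: check out the commit, run the test suite, and report pass or fail so that only validated code gets merged. It is a simplified stand-in, not how you would actually configure Jenkins, and the pytest command is an assumption about your test tooling.

    # ci_check.py -- simplified stand-in for a CI server's per-commit validation job.
    import subprocess
    import sys

    def validate_commit(repo_path, commit_sha):
        # Check out exactly the commit that was pushed...
        subprocess.run(["git", "-C", repo_path, "checkout", commit_sha], check=True)
        # ...then run the test suite (pytest is assumed here for illustration).
        result = subprocess.run(["pytest", "-q"], cwd=repo_path)
        return result.returncode == 0

    if __name__ == "__main__":
        repo_path, commit_sha = sys.argv[1], sys.argv[2]
        # Exit code 0 signals "safe to merge"; anything else blocks the change.
        sys.exit(0 if validate_commit(repo_path, commit_sha) else 1)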

 

Continuous Delivery

In order to get new code from integration to actual deployment, you need to automatically validate that code and then promote it to a production environment. This part is, in many ways, the crux of the DevOps equation. Why? Because it actually powers the desired result of this new IT paradigm—namely, the ability of developers to update software products multiple times per day, resulting in the potential not just for much quicker time-to-market for new products and updates, but also much quicker bug fixes and patches. Without a fully functioning CI/CD pipeline, this is essentially impossible, and you’re back to measuring your time-to-market in weeks or months—meaning that you’re potentially at a real competitive disadvantage.
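For a sense of what the delivery half can look like once a build has passed validation, here is a minimal Python sketch of an automated rollout step. It assumes a Kubernetes cluster with kubectl configured, and the deployment, container, and image names are placeholders; the point is simply that promotion to production becomes a scripted, repeatable action rather than a manual one.

    # deploy_sketch.py -- illustrative continuous-delivery step, assuming Kubernetes and kubectl.
    import subprocess

    def deploy(image_tag, deployment="app", container="app"):
        # Point the production deployment at the freshly validated image...
        subprocess.run(
            ["kubectl", "set", "image", f"deployment/{deployment}", f"{container}={image_tag}"],
            check=True,
        )
        # ...and wait for the rollout to complete so a failure surfaces immediately.
        subprocess.run(["kubectl", "rollout", "status", f"deployment/{deployment}"], check=True)

    if __name__ == "__main__":
        # Hypothetical image tag produced by the CI stage.
        deploy("registry.example.com/acme/app:latest")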

Obviously, this puts a ton of pressure on Operations and System Support to make sure that all of this automation runs smoothly, so that all of the complex moving parts that comprise the DevOps methodology actually result in code being integrated, built, staged, and deployed as expected—without any big disruptions. It’s crucial that no new systems or technologies jeopardize overall stability.

The pressure to get this right is no joke. A recent survey found that a staggering 100% of respondents cited a lack of automation as a reason they weren’t deploying more often. While the vast majority of companies are employing some kind of DevOps, ostensibly to speed time-to-market, we clearly have a long way to go before a fully automated pipeline is actually the norm.

 

Continuous Testing and Monitoring

Above, we tossed two words around fairly casually: “validation” and “stability.” Hopefully this didn’t give anyone the impression that these things were afterthoughts, or that they could be lumped into the larger process of putting cloud-based IT in place. On the contrary, ramping up a successful DevOps team means prioritizing stability and validated code by automating things like stress and load tests, or even functional tests, within the build pipeline. Part of the idea here is obvious: to quickly and automatically make sure that new code that’s integrated into the repository isn’t going to break everything—but it’s also a matter of speeding up testing cycles by taking smaller and simpler tasks off QA’s plate, such that they can spend more of their attention on powering through more complex testing flows on a rapid timescale.
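To illustrate the kind of checks that can run inside the build pipeline, here is a small Python sketch combining a functional smoke test with a very crude load test against a staging endpoint. The URL, request counts, and thresholds are all invented for illustration; real stress and load testing would use dedicated tooling, but even a script like this catches obvious breakage before a human tester ever gets involved.

    # pipeline_checks.py -- illustrative smoke and load checks against a hypothetical staging endpoint.
    import concurrent.futures
    import urllib.request

    STAGING_URL = "https://staging.example.com/health"  # placeholder endpoint

    def smoke_test():
        # Functional check: the service answers and reports healthy.
        with urllib.request.urlopen(STAGING_URL, timeout=5) as resp:
            assert resp.status == 200, f"unexpected status {resp.status}"

    def load_test(request_count=100, workers=10):
        # Crude load check: fire concurrent requests and fail the build on any error.
        def hit(_):
            with urllib.request.urlopen(STAGING_URL, timeout=5) as resp:
                return resp.status
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            statuses = list(pool.map(hit, range(request_count)))
        assert all(status == 200 for status in statuses), "load test saw failing responses"

    if __name__ == "__main__":
        smoke_test()
        load_test()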

Speaking of QA: to keep pace with all of these changes, QA itself needs to undergo a transformation that’s fairly similar to what we’re describing with DevOps—and it’s often up to the DevOps team to support that transformation by creating visibility and transparency between QA and development and making automated workflows and processes available to QA within the larger pipeline. Taken together, rapid automated validation by DevOps and quicker testing turnaround within the context of QA automation serves to increase test accuracy (by taking the likelihood of human error out of the equation for tests themselves) and improve IT stability (by giving everyone a lot more breathing room and, once again, by reducing the chance of human error once everything’s properly configured). In this way, test automation and automated validation become key pieces of the DevOps puzzle.

 

Pitfalls in DevOps Implementation

So far, we’ve given something like the CliffsNotes version of DevOps: yes, these represent the basic building blocks, but you’ll need to gain a lot more knowledge before you’re ready to choose the right tools and implement them correctly. Even something as ubiquitous as the cloud can contain hidden complexities, and companies of all shapes and sizes often find that they don't have the in-house knowledge to successfully migrate their standard software without huge disruptions. And that's before we even discuss the cultural and organizational aspects of the process.

They say that “there’s no such thing as a junior DevOps engineer,” but what does that mean, exactly? How do you actually build more collaboration into the development process across touchpoints? How can you create alignment effectively between DevOps and your larger corporate goals? And what about measurement—what KPIs do you put in place, and how do you track them in a way that actually gives management a clear picture of how things stand and where you need to improve? Simply put, to ramp up DevOps, you first have to ramp up your own knowledge to a considerable extent. This might sound daunting, but luckily there are experienced teams you can turn to for help.
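On the measurement question raised above, one common starting point, offered here as an assumption rather than a prescription, is to track a handful of delivery metrics such as deployment frequency and lead time from commit to production. The sketch below shows how simply those can be computed from a log of deploy records; the record format and the dates are invented for illustration.

    # delivery_metrics.py -- illustrative KPI calculation from hypothetical deploy records.
    from datetime import datetime

    # Each record: when the change was committed and when it reached production (invented data).
    deploys = [
        {"committed": datetime(2020, 8, 3, 9, 0),   "deployed": datetime(2020, 8, 3, 15, 30)},
        {"committed": datetime(2020, 8, 10, 11, 0), "deployed": datetime(2020, 8, 11, 10, 0)},
        {"committed": datetime(2020, 8, 17, 14, 0), "deployed": datetime(2020, 8, 17, 16, 45)},
    ]

    def deployment_frequency(records, period_days=30):
        """Rough deploys-per-period estimate over the span covered by the records."""
        span_days = (max(r["deployed"] for r in records) - min(r["deployed"] for r in records)).days or 1
        return len(records) * period_days / span_days

    def average_lead_time_hours(records):
        """Average time from commit to production, in hours."""
        total = sum((r["deployed"] - r["committed"]).total_seconds() for r in records)
        return total / len(records) / 3600

    if __name__ == "__main__":
        print(f"Deploys per 30 days: {deployment_frequency(deploys):.1f}")
        print(f"Average lead time:   {average_lead_time_hours(deploys):.1f} hours")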

 

Learn More About Intertec’s Software Engineering and Support Services 

Intertec specializes in building and supporting custom software for its diverse clients. Our interdisciplinary team of professionals has experience at all stages of the software development lifecycle. Click here to learn more. Prefer a personal consultation? Go ahead and schedule a meeting with us here.

Tags: Infrastructure, Software Development

Written by Rafael Alvarez

An agilist at heart, Rafael Alvarez has more than twenty years of experience working on business-critical software in the fields of E-Commerce, Last Mile Logistics, Insurance, Home Mortgage, and K-12 E-Learning. Throughout his career, Rafael has filled the roles of System Developer, Development Leader, Project Manager, QA Manager, and System Architect. He has been working with Agile Methodologies (like Extreme Programming and Scrum) since 1999, and is currently one of the in-house trainers for Scrum Masters.
