DevOps and DevSecOps have become hot topics in the current landscape of software development. Often thrown around as a buzzword, DevOps represents a shift of operational responsibility onto developers and out of the hands of dedicated systems administrators.
The smorgasbord of DevOps services currently available to developers is large and sometimes overwhelming: licensed products, open-source projects in varying states of maturity, and everything in between. On top of that, security is an ever-present concern, and folding security into DevOps practices (what is known as DevSecOps) adds another wrinkle to the consideration, another layer to the solution, and another set of guiding principles for the developer to learn, understand, and implement.
But the adoption of DevOps and DevSecOps doesn’t have to be this daunting. Here at Affirma, we work closely with Microsoft, and the Azure platform is our go-to when looking at DevOps solutions on the market. The large suite of services and the level of integration between them give a single developer the power to go from focusing solely on code to managing the entire lifecycle of that code in a structured, automated, and secure fashion.
This blog details best practices and strategies around the implementation of DevOps and DevSecOps within the Azure ecosystem.
Implementing any DevOps and DevSecOps strategy starts with the code: not the code itself (to a DevOps engineer, its contents matter less), but where that code lives, how it’s accessed, and how it’s managed.
While Azure Repos can serve as the platform for housing code repositories, Microsoft’s acquisition of GitHub makes the GitHub platform the number one choice by far. Both GitHub and Azure Repos support Git repositories. Git is a distributed source control model designed to support multiple team members collaborating on the same codebase in parallel. Its flexible branching capabilities, pull request system, and merge tools make Git more desirable for teams than alternatives such as Team Foundation Version Control (TFVC), which are more rigid and less conducive to collaboration. Git, however, comes with a learning curve. Understanding the distributed model, especially coming from a centralized system such as TFVC, isn’t trivial, but it is vital to adopting a DevOps strategy in a collaborative environment.
Any team or practice developing software should be allowed and encouraged to learn Git, migrate any existing TFVC repositories to Git repositories hosted on GitHub (there are tools available for this), and set up a branching strategy conducive to continuous integration and continuous deployment (CI/CD).
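As a minimal sketch of what such a setup looks like in practice (the repository and branch names here are hypothetical), a team might designate ‘main’ as the integration branch and do feature work on short-lived branches that are merged back via pull request:

```shell
# Create a new repository with an integration branch and a feature branch.
# Repo and branch names are illustrative; adapt them to your own flow.
set -e
git init myapp && cd myapp
git config user.email dev@example.com && git config user.name "Dev"  # identity needed to commit
echo "# myapp" > README.md
git add README.md
git commit -m "Initial commit"
git branch -M main                    # 'main' serves as the integration branch
git checkout -b feature/login main    # short-lived branch for a single feature
# ...commit work here, then open a pull request from feature/login into main
```

From here, the feature branch is pushed to GitHub and merged into the integration branch only through a pull request, never directly.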
Define and Implement CI/CD
CI/CD is a core tenet of DevOps. The manual deployment of code in almost any situation is a potential point of failure. Data, security, server resources: all of these are at risk where manual deployments are in play. In many cases, all it takes is a typo, one bad connection string, or one environment variable pointed at staging instead of production, and a team must go into disaster recovery mode.
Defining and implementing CI/CD for development and integration environments should be one of the team’s first goals when implementing a DevOps strategy.
Define Your Team’s Branching Strategy
Continuous integration means that when a set of code or commits arrives at a pre-defined integration branch, the code is merged (integrated) into the existing code. This integration represents the next incremental version of the application. The merge triggers a pre-defined build and suite of tests that run automatically to both build and validate the new version of the code. Very often, the ‘master’ branch of the repository is used as the integration branch, but a separate ‘development’ or ‘integration’ branch can be used as well. There are several branching strategies, or ‘flows’, available for a team to use, some of the more common being Git flow, GitHub flow, and GitLab flow. Each has different strengths and weaknesses, and a team should invest some time upfront deciding which flow is appropriate for its members and project.
For whatever flow is chosen, pull requests should always be required to merge code into integration. A pull request is not a direct merge into a branch, but a request to merge code into a branch. This allows the merging process to be gated by policies decided on by the team.
For instance, passing code review can be required before a pull request can be completed. Whether this is peer or manager review is up to your team, but this kind of policy ensures that multiple sets of eyes see code before it makes its way into integration. Isolating bugs becomes far more difficult once code has been integrated alongside other code and commits.
Pull requests should also be gated on passing all tests and passing the CI build. If a pull request fails tests, it can be passed back automatically to the developer who owns the feature in question. This creates a rich history of the development of any feature in the system, and identifying where bugs or issues enter the system becomes easier and more achievable. It also prevents integrating any code that would cause a build to fail: only passing builds should be able to be integrated and therefore deployed.
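As a sketch of what this gating looks like on GitHub, a minimal Actions workflow (the file path and script names below are placeholders) runs the build and tests on every pull request; branch protection rules can then require this check to pass before a merge is allowed:

```yaml
# .github/workflows/ci.yml - illustrative; build/test steps vary by stack
name: CI
on:
  pull_request:
    branches: [ main ]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: ./build.sh        # placeholder build step
      - name: Test
        run: ./run-tests.sh    # placeholder test step
```

With this in place, a failing build or test suite blocks the pull request itself, so broken code never reaches the integration branch.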
This strategy puts the onus of responsibility onto the developer. It is now the developer’s job not only to code, but to be mindful of tests, write tests, participate in code reviews, and assist in configuring the repository and the CI build. While it may seem daunting, this sort of up-front work pays dividends down the road, preventing bugs, bad builds, and untested code from making their way out into production.
Speaking more to DevSecOps, a well-configured continuous integration pipeline allows a great deal of security considerations to be addressed in an automated fashion. GitHub in particular, as a platform for hosting a CI solution, provides some very strong tools and integrations that bring a team and project closer to the kind of security compliance we all desire.
Secure access to code should be implemented with role-based access and multi-factor authentication.
In reference to roles, only those people who are going to be writing code should be given write or commit permissions to a repository.
Always keep this subset of personnel with access as thin as possible.
The smaller the cross-section of access and permissions, the smaller the attack vector available. GitHub also supports Azure AD as an authentication provider, as well as multi-factor authentication.
Always use a trusted authentication provider, and always enable multi-factor authentication.
This point cannot be stressed enough. We all understand that using multi-factor authentication day in and day out can seem like an inconvenience, but passwords alone are no longer enough to protect sensitive information, and even when credentials are kept out of code, the code itself represents protected intellectual property. Especially as consultants, we must keep security in mind right at the outset of a project.
This shift towards a security-minded approach to DevOps is what DevSecOps is all about. Very often in development, security comes into play near the end of a project, in the form of auditing and testing on production. All the effort comes in on the ‘right’ side of the timeline, which is why DevSecOps is also referred to as a ‘shift-left’ strategy or mentality: to implement it is to shift the addressing of security concerns toward the front of a project, into the hands of developers. This again adds responsibility and accountability to a developer’s role, but there are modern tools readily available that empower the developer to address these concerns, and this toolset is growing and developing.
GitHub is on the front lines of developing and integrating tools that provide security over codebases. Configuring and utilizing tools such as these is often the developer’s first step towards implementing a DevSecOps strategy in a project or organization.
GitHub features powerful code analysis tools to detect vulnerabilities in code. Unparameterized or unsanitized database queries, as an example, might be caught and corrected before ever arriving in integration. Advances in machine learning have driven the development of semantic analysis engines such as GitHub’s CodeQL engine. With the widespread use of open-source software, more and more enterprise applications depend on open-source libraries, and these are not immune to vulnerabilities of their own. Companies such as WhiteSource and Dependabot have created tools that can detect and automatically update vulnerable versions of open-source dependencies, protecting you from vulnerabilities in the code you depend on. GitHub has even acquired Dependabot to bring this sort of open-source protection directly to your GitHub repositories. The code scanning tools can also detect the misuse of secrets in code, such as credentials and connection strings.
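Enabling dependency updates, for example, is a matter of committing a configuration file to the repository. This minimal sketch (the ecosystem and schedule are illustrative; use whatever matches your project) asks Dependabot to check npm dependencies weekly:

```yaml
# .github/dependabot.yml - illustrative configuration
version: 2
updates:
  - package-ecosystem: "npm"   # package manager of your project
    directory: "/"             # where the manifest (package.json) lives
    schedule:
      interval: "weekly"
```

When a vulnerable or outdated dependency is found, Dependabot opens a pull request with the version bump, which then flows through the same review and CI gates as any other change.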
It is not uncommon for developers to reference variables containing secrets directly in code, or even to hard-code these secrets in. This is a major security hole. The use of vaults in software development can start at project inception and carry through to production, addressing this common concern of secret management. By storing secrets in a vault, referencing those secrets through vault APIs or clients, and requiring authentication to the vault, developers can securely work with secrets, developing their app without any need to have those secrets directly accessible. This serves not just to protect the app from attackers, but to meet compliance by limiting developer access to what they need, and nothing more. A developer is able to utilize a production database connection string without ever having to handle it directly. GitHub, Azure, and most of the major players in the market feature some sort of vault for handling secrets in development.
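As a rough sketch of the pattern using the Azure CLI (the vault and secret names here are placeholders, and an authenticated Azure session is assumed), an administrator stores the secret once and everyone else reads it by reference, never embedding the value in code:

```shell
# Hypothetical vault and secret names; requires 'az login' and vault permissions.
# Store the secret once (typically done by an administrator):
az keyvault secret set --vault-name my-dev-vault \
    --name DbConnectionString --value "<the-real-connection-string>"

# Read the secret by name at build or run time, instead of hard-coding it:
az keyvault secret show --vault-name my-dev-vault \
    --name DbConnectionString --query value -o tsv
```

The application and its pipeline reference the secret by name; the plain-text value lives only in the vault, behind authentication and access policies.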
In short, GitHub provides tooling to allow developers to scan and fix vulnerabilities in their code, to monitor and update vulnerable dependencies when they are identified, and to manage and utilize secrets during development in a secure fashion, all automatically and all before a version of the code ever hits a live state.
Deployment of Applications
Beyond security around development and integration, DevOps and DevSecOps considerations extend into the deployment of applications, and into the lifecycle of infrastructure and code as applications are released into production. The second half of CI/CD is continuous deployment. The most common scenario is an automated deployment of successfully integrated code into an integration or sandbox environment. This puts the application into the hands of QA for further testing, and of stakeholders for requirements validation and user acceptance testing (UAT).

The responsibility of managing these deployments has historically been in the court of the systems administrators, whose job it was to maintain the infrastructure of physical servers that housed both application runtimes and databases. With the advent and rise of the cloud, platforms such as Azure and AWS have taken away the arduous task of infrastructure management and offloaded it to automated virtualization. As developers, if we need somewhere to deploy our application, we simply provision it, pay for it, and the platform does the heavy lifting of resource allocation, networking, generating endpoints, and all the other little things that make an application server work. This means it is within the DevOps engineer’s capability (and therefore responsibility) to provide the infrastructure that makes the application work, write the script that deploys the application to the appropriate environment, and, in the case of continuous deployment, define the automation that triggers the deployment of integrated code, completing the CI/CD pipeline.
Azure provides a service that allows developers to implement CI/CD pipelines and define releases to various environments. Azure Pipelines bridges development and production, driving everything from builds and automated testing to changes in infrastructure and, of course, application deployments. Commits to certain target branches trigger CI builds in Pipelines, running unit tests if available, publishing test results for developers and stakeholders, and, on success, piping the appropriate builds or artifacts to a pre-defined release pipeline. This can deploy the application, apply changes to database schema, or trigger changes to infrastructure, even extending to provisioning brand-new instances of services as required.
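The overall shape of such a pipeline might look like the following sketch of an `azure-pipelines.yml` (the stage layout, environment name, and script names are placeholders, not a prescription):

```yaml
# Illustrative pipeline: build and test on commits to main, then deploy on success.
trigger:
  branches:
    include: [ main ]

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: ./build.sh       # placeholder build step
          - script: ./run-tests.sh   # placeholder test step
          - publish: $(System.DefaultWorkingDirectory)/out
            artifact: app

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToStaging
        environment: staging         # placeholder environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: app
                - script: ./deploy.sh   # placeholder deployment step
```

Because the deployment stage depends on the build stage, only a successfully built and tested artifact ever reaches an environment.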
Terraform, an open-source infrastructure-as-code tool, allows developers to define cloud architecture through a series of manifest files and scripts. Given a valid definition and resource manager credentials, Terraform can be initiated from Azure Pipelines and will shape the landscape of the Azure instances to match the definition in the Terraform files. A successful Terraform run ensures that the deployment target needed for a release exists and is sized and configured as needed. From there, the release pipeline pulls the latest appropriate build (or image, in the case of a containerized application), sets up whatever runtimes or libraries are needed on the target environment, and deploys the release to that application environment. With automation at all points in the pipeline, a developer can implement CI/CD using Azure Pipelines, automating both the build of integrated code and the deployment of the release, leveraging technology such as Terraform to automatically provision the environment for the deployment. This is another example of the shift of traditional roles from separate personnel into a DevOps-powered software engineer capable of full application lifecycle management.
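A minimal Terraform sketch of this idea (resource names, region, and SKU are illustrative placeholders) ensures that a Linux App Service plan exists before a release targets it:

```hcl
# Illustrative definitions; names, regions, and SKUs are placeholders.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "app" {
  name     = "rg-myapp-staging"
  location = "westus2"
}

resource "azurerm_service_plan" "app" {
  name                = "plan-myapp-staging"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  os_type             = "Linux"
  sku_name            = "B1"
}
```

Running `terraform apply` against this definition in a pipeline creates the resources if they are missing and reconciles them if they have drifted, so the deployment step can rely on the environment existing in a known state.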
Of course, the build and release stages of the application lifecycle come with their own security concerns. An automated build may still not meet compliance requirements, whether HIPAA, PCI, or others. Deployed resources could themselves contain vulnerabilities that expose the application to attacks. Key and secret management continues to be a concern, as different environments define and ingest different sets of secrets. And even in production, the development team should be aware of any attempted attacks against the application.
DevSecOps tools and strategies can cover a lot of ground in this area. Azure provides the Azure Policy service, which can apply security policies inside Azure Pipelines, adding both pre- and post-deployment gates that prevent failing builds or releases from reaching production in a vulnerable state. Azure Policy can also be applied to services such as Azure Kubernetes Service (AKS), a common runtime for deploying scalable applications in a containerized fashion. Regarding container applications, tools such as WhiteSource offer container security functionality that checks for vulnerabilities in the built container, as well as container signing to protect container images from tampering and hijacking. The Azure Pipelines deployment gates can also apply checks to Azure Resource Manager (ARM) templates and to deployed resources, applying a layer of security automation to the deployment of resources in Azure.
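To give a sense of what a policy rule looks like, this definition (a common sample pattern; the resource type checked is just one example) denies the creation of storage accounts that allow plain HTTP traffic:

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "Microsoft.Storage/storageAccounts/supportsHttpsTrafficOnly", "equals": "false" }
    ]
  },
  "then": { "effect": "deny" }
}
```

Assigned at the subscription or resource group level, a rule like this stops a non-compliant resource at deployment time, rather than flagging it in an audit months later.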
Azure Key Vault as a service covers secret management in the deployment of applications to Azure resources, injecting secrets into deployed applications at release time and protecting client secrets from external threats and developers alike. Vaults such as Azure Key Vault also support a bring-your-own-key (BYOK) model, allowing clients and stakeholders to supply their own security keys to a vault that a developer can access, ensuring that at no time does the developer ever have plain-text secrets in their possession. This is the kind of ‘shift-left’ strategy that DevSecOps proposes: considering security from the very start of a project through to the deployment of production code.
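In Azure Pipelines, for instance, secrets can be pulled from a vault at release time with the built-in Key Vault task. In this sketch, the service connection, vault name, and deployment script are placeholders:

```yaml
# Illustrative release steps; connection and vault names are placeholders.
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-service-connection'  # placeholder service connection
      KeyVaultName: 'my-vault'                    # placeholder vault name
      SecretsFilter: 'DbConnectionString'         # fetch only the secrets needed

  # The fetched secret is now available as the pipeline variable $(DbConnectionString)
  - script: ./deploy.sh
    env:
      DB_CONNECTION: $(DbConnectionString)
```

The secret exists only in the vault and in the running pipeline; it never appears in source control, and no developer has to handle its plain-text value.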
The key tenet of DevOps is automation: automation of builds, tests, deployments, and releases, all driven by the core task of writing and committing code. Practicing DevOps means shifting a developer’s work, particularly at the front end of a project, to configuring all of the automation and tools necessary so that the DevOps toolchain triggers and fires in the background as needed. DevSecOps brings a level of concern for security to that up-front work, building security tools and automation into the existing DevOps pipeline. More and more, DevOps is making its way into developers’ vocabularies, but it is still absent from many teams and organizations. Every developer who can understand and learn to implement DevOps is a developer empowered to drive the development of an application through the kind of structured lifecycle that lends itself to creating a more durable, maintainable, secure application. But developers must be allowed to both learn and implement these principles.
In practice, developers should be given time to train on common technologies such as Git, GitHub, and the various branching strategies available. In choosing and working with Azure and Azure services, developers should be afforded the time and opportunity to train in Azure technologies. Microsoft provides several certification tracks, including a DevOps certification. While not every developer needs to be a DevOps Certified Expert, the free training available provides the knowledge necessary for a developer to begin putting DevOps into practice. And in project development, time needs to be afforded for a team to bring online the DevOps infrastructure that will let them move forward with automation in place, reducing time, effort, and errors by streamlining the development and release process. Very often in live projects, the hours to stand up DevOps aren’t allocated or even considered. DevOps is the afterthought, to say nothing of DevSecOps. To build a strong, cloud-centric, agile team of developers is to bring DevOps into the fold as a key skill, and a key component of the project lifecycle.