A pleasant walk through computing


2021 DORA Explorer - My Highlights from the State of DevOps report

In my opinion, for what we do as our business, the DORA group's work is among the most important for us to understand and use. It's the best resource I know to provide software delivery guidance that's based in evidence, not hearsay and personal opinion.

I recommend downloading and reading the complete PDF from the web site.

2021 Accelerate State of DevOps report addresses burnout, team performance

Important
ALL excerpts below are directly quoted via copy/paste from the linked 2021 DevOps report. Extra emphases are mine. They're what stood out to me, and they may not be what stands out to you!

The 5 Metrics

With seven years of data collection and research, we have developed and validated four metrics that measure software delivery performance. Since 2018, we’ve included a fifth metric to capture operational capabilities.

Note that these metrics focus on system-level outcomes, which helps avoid the common pitfalls of software metrics, such as pitting functions against each other and making local optimizations at the cost of overall outcomes.

Cloud

Respondents who use hybrid or multi-cloud were 1.6 times more likely to exceed their organizational performance targets than those who did not.

Unsurprisingly, respondents who have adopted multiple cloud providers were 1.5 times more likely to meet or exceed their reliability targets.

For the third time, we find that what really matters is how teams implement their cloud services, not just that they are using cloud technologies. Elite performers were 3.5 times more likely to have met all essential NIST cloud characteristics.

  1. On-demand self-service Consumers can provision computing resources as needed, automatically, without any human interaction required on the part of the provider.
  2. Broad network access Capabilities are widely available and can be accessed through multiple clients such as mobile phones, tablets, laptops, and workstations.
  3. Resource pooling Provider resources are pooled in a multi-tenant model, with physical and virtual resources dynamically assigned and reassigned on-demand. The customer generally has no direct control over the exact location of the provided resources, but can specify location at a higher level of abstraction, such as country, state, or data center.
  4. Rapid elasticity Capabilities can be elastically provisioned and released to rapidly scale outward or inward with demand. Consumer capabilities available for provisioning appear to be unlimited and can be appropriated in any quantity at any time.
  5. Measured service Cloud systems automatically control and optimize resource use by leveraging a metering capability at a level of abstraction appropriate to the type of service, such as storage, processing, bandwidth, and active user accounts. Resource usage can be monitored, controlled, and reported for transparency.

SRE and DevOps

While the DevOps community was emerging at public conferences and conversations, a like-minded movement was forming inside Google: site reliability engineering (SRE). . . . SRE is a learning discipline that prioritizes cross-functional communication and psychological safety, the same values that are at the core of the performance-oriented generative culture typical of elite DevOps teams.

In analyzing the results, we found evidence that teams who excel at these modern operational practices are 1.4 times more likely to report greater SDO performance, and 1.8 times more likely to report better business outcomes.

Typically, individuals with a heavy load of operations tasks are prone to burnout, but SRE has a positive effect. We found that the more a team employs SRE practices, the less likely its members are to experience burnout.

Documentation

This year, we looked at the quality of internal documentation, which is documentation–such as manuals, READMEs, and even code comments–for the services and applications that a team works on. We measured documentation quality by the degree to which the documentation:

  • helps readers accomplish their goals
  • is accurate, up-to-date, and comprehensive
  • is findable, well organized, and clear

We found that about 25% of respondents have good quality documentation, and the impact of this documentation work is clear: teams with higher quality documentation are 2.4 times more likely to see better software delivery and operational (SDO) performance.

Security

[Shift left] and integrate throughout As technology teams continue to accelerate and evolve, so do the quantity and sophistication of security threats. In 2020, more than 22 billion records of confidential personal information or business data were exposed, according to Tenable’s 2020 Threat Landscape Retrospective Report. Security can’t be an afterthought or the final step before delivery, it must be integrated throughout the software development process.

Consistent with previous reports, we found that elite performers excel at implementing security practices. This year, elite performers who met or exceeded their reliability targets were twice as likely to have security integrated in their software development process.

Technical DevOps capabilities

Our research shows that organizations who undergo a DevOps transformation by adopting continuous delivery are more likely to have processes that are high quality, low-risk, and cost-effective.

Specifically, we measured the following technical practices:

  • Loosely coupled architecture
  • Trunk-based development
  • Continuous testing
  • Continuous integration
  • Use of open source technologies
  • Monitoring and observability practices
  • Management of database changes
  • Deployment automation

We found that while all of these practices improve continuous delivery, loosely coupled architecture and continuous testing have the greatest impact.

Elite performers who meet their reliability targets are 5.8 times more likely to leverage continuous integration. In continuous integration, each commit triggers a build of the software and runs a series of automated tests that provide feedback in a few minutes. With continuous integration, you decrease the manual and often complex coordination needed for a successful integration.

COVID-19

What reduced burnout?

Despite this, we did find a factor that had a large effect on whether or not a team struggled with burnout as a result of working remotely: culture. Teams with a generative team culture, composed of people who felt included and like they belonged on their team, were half as likely to experience burnout during the pandemic. This finding reinforces the importance of prioritizing team and culture. Teams that do better are equipped to weather more challenging periods that put pressure on both the team as well as on individuals.

Culture

Broadly speaking, culture is the inescapable interpersonal undercurrent of every organization. It is anything that influences how employees think, feel, and behave towards the organization and one another. All organizations have their own unique culture, and our findings consistently show that culture is one of the top drivers of organizational and IT performance. Specifically, our analyses indicate that a generative culture–measured using the Westrum organizational culture typology, and people’s sense of belonging and inclusion within the organization– predicts higher software delivery and operational (SDO) performance. For example, we find that elite performers that meet their reliability targets are 2.9 times more likely to have a generative team culture than their low-performing counterparts.

Our results indicate that performance-oriented organizations that value belonging and inclusion are more likely to have lower levels of employee burnout compared to organizations with less positive organizational cultures.

Given the evidence showing how psycho-social factors affect SDO performance and levels of burnout among employees, we recommend that if you’re seeking to go through a successful DevOps transformation, you invest in addressing culture-related issues as part of your transformation efforts.

Azure DevOps Locally-Hosted Build Agent With Global NPM/.NET Tools

BLUF

Terse examples of installing NPM global packages and .NET global tools for use by locally-hosted Azure DevOps build agents. This removes having to install them as part of the pipeline YAML, and reduces the chance of contention when multiple agents run on the same machine.

Basically, install to a folder that's on the PATH.
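
Before changing anything, it can help to see what's already on the machine-level PATH. A quick check:

# Show the current machine-level PATH, one entry per line
[Environment]::GetEnvironmentVariable('PATH', 'Machine') -split ';'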

.NET Tools

The instructions below assume an E: drive. Alter to fit your server.

Once, as Administrator

# Prep global tools directory
$dotnetTools = "E:\dotnet-tools"
New-Item $dotnetTools -ItemType Directory
$path = [Environment]::GetEnvironmentVariable('PATH', 'Machine')
$newpath = $path + ";$dotnetTools"
[Environment]::SetEnvironmentVariable("PATH", $newpath, 'Machine')

To view what's already installed.

$dotnetTools = "E:\dotnet-tools"
dotnet tool list --tool-path $dotnetTools

To install.

$dotnetTools = "E:\dotnet-tools"
dotnet tool install dotnet-ef --version 3.1.5 --tool-path $dotnetTools
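
The same --tool-path works for maintenance later. For example, to update or remove the tool installed above:

$dotnetTools = "E:\dotnet-tools"
dotnet tool update dotnet-ef --tool-path $dotnetTools
dotnet tool uninstall dotnet-ef --tool-path $dotnetTools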

NPM

Once, as Administrator
Before doing these steps, confirm the directory isn't already on the machine PATH. Basically, find out where NPM packages, if any, are already installed. Below is the default Node.js installation folder.

# Prep global packages directory
# Verify NodeJs is installed here:
$nodePath = "C:\Program Files\nodejs"
$path = [Environment]::GetEnvironmentVariable('PATH', 'Machine')
$newpath = $path + ";$nodePath"
[Environment]::SetEnvironmentVariable("PATH", $newpath, 'Machine')

To view what's already installed.

$nodePath = "C:\Program Files\nodejs"
npm prefix --global                 # show where global packages currently install
npm config set prefix $nodePath     # point the global prefix at the folder on PATH
npm list --global --depth=0         # list what's installed there

To install.

$nodePath = "C:\Program Files\nodejs"
npm prefix --global                 # confirm the global prefix
npm config set prefix $nodePath     # only needed if not already set above
npm install --global vsts-npm-auth
npm install --global azure-functions-core-tools@3 --unsafe-perm true
npm install --global aurelia-cli@1.3.1
npm install --global @angular/cli@12.0.1
npm install --global nswag
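
To spot-check that the agents will resolve these tools from the machine PATH, open a new shell (and restart the agent service so it picks up the PATH change) and try the CLIs. The version commands vary by tool; these are the usual ones:

func --version        # azure-functions-core-tools
ng version            # @angular/cli
au help               # aurelia-cli
nswag version         # nswag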

Developing With Project Dependencies - When to Package, When Not To

One last thought first
This article uses the .NET framework and development for its examples. However, the principles described should apply to most languages and frameworks. For example, the same issues and approaches apply to Angular applications.


Credits

I wrote most of this, so I get a bunch of the credit. But it wouldn't be as good if my colleagues at Clear Measure hadn't reviewed my words and provided thoughtful questions and debate. They especially clarified to me that the message of the document should be positive toward packaging, that packaging is a critical and established technique for creating performant, maintainable, and scalable business applications.

So, thank you Trish Polay and Mike Sigsworth. You rock!

I don't consider this the last word on this subject. My views will evolve, and I want this article to evolve with them. Stay tuned!

Packages Are Good!

This document examines when it makes sense to publish shared dependencies as packages vs maintaining project references. A strong case is made that shared code doesn't have to become packages. However, this shouldn't be taken as affirming shared project references as desirable.

The question is less whether you'll be packaging reusable bits of your code base, and more when. If you're developing a large enterprise application, breaking it into smaller, independently developed, tested, and deployed services has important benefits. At minimum, this means implementing something like an Onion Architecture informed by Domain-Driven Design. For high scalability, microservices can be a strong choice.

There's little doubt that developing independent--yet related--projects is more difficult than a monolith where all the code is available all at once. But difficulty isn't the only factor to consider. A loosely-coupled architecture that's easier to test, faster to deploy, and more resilient to change pays for itself many times over, even if daily development takes more time or effort.

Beware coding by expedience. Technical debt is sure to quickly accrue.

The guidance below is when it makes sense to use packages given how your code is today. Maybe you're working in a monolith and packaging isn't a good near-term decision. But don't make that your long-term decision. Really.

Here are some examples of how you can think about it.

We're starting a new code base with shared code. What do we do?
Try putting some shared code in its own repo. Figure out how that works.

Maybe our new code base has code that might be shared, but isn't yet.
That's OK. Likewise, try putting the shared code in its own repo.

We have shared projects. But when we pull them out, developing them independently becomes a headache. What do we do?
Reexamine the code. Instead of assuming your projects as they are should become packages, try extracting code from those projects into good packages.

As you read below, you may be tempted to stick with your monolith and eschew packages. That's not what I want. I want you to build an environment where you can develop high quality, less complex, loosely-coupled applications quickly. You may not be there today, but that's where you're headed.

In all of the above, you won't get it right the first time. Your code will evolve along with your understanding. You'll need to get help and find out how other shops do it. Expect that.

The Developer's Essential Desire

Work on all parts of the application at the same time

The Practical Difference Between External and Internal Dependencies

  • External dependency packages are intended for generic use cases
  • Internal dependency projects are for specific applications

Both are dependencies, but who uses them differs. An external package like Json.Net doesn't know who's using it. An internal dependency like CustomerContractService is developed for specific applications.
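
For illustration, here's how the two kinds of reference typically look when added with the dotnet CLI (the project and package names here are just examples):

# External dependency: a PackageReference restored from a package store
dotnet add .\MyApp\MyApp.csproj package Newtonsoft.Json

# Internal dependency: a ProjectReference to a relative path in the code base
dotnet add .\MyApp\MyApp.csproj reference ..\CustomerContractService\CustomerContractService.csproj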

The Fundamental Conflicts

  • Loose coupling says projects can be developed, tested, and deployed independently.
  • But related projects benefit from developing, testing, and deploying together.

External packages don't have specific parent dependencies, so they can be developed independently and reside in their own repository and solution workspace. Internal applications have shared dependencies that need to be developed simultaneously.

These two development models are at odds.

For external packages,

  • The parent references the package dependency, which is downloaded from a store
  • The package dependency's code isn't changed
  • The package version is changed externally, and the parent chooses when to upgrade

For internal dependencies,

  • The parent references the project dependency, which is in a relative path
  • Changes to the parent and dependency are committed to source control together
  • The version is changed at the same time for the dependency and the parent

The two critical concerns are:

  1. Internal dependencies have a fixed relative path stored in source control
  2. A parent expects the dependency to exist when submitted to continuous integration

Challenging Assumptions

There are several reasons a development shop may decide to separate dependencies into packages.

  1. Clarify the code base by explicitly decoupling the dependency
  2. Allow greater or easier consumption of the dependency
  3. Easily share reusable code between projects that live in separate repositories and are worked on by independent teams
  4. Allow independent projects to choose when to upgrade to new version of shared code
  5. A belief that creating packages is how it "should" be done for separation of concerns, loose coupling, domain-driven design, microservices, etc.

Several of these reasons can be challenged, especially the last. Here are some questions to ask yourself and your team.

  • Imagine the dependency is a third party open source project. How would that affect our application development?
  • If we don't package our dependencies, is our coding easier or harder? Which parts?
  • Does an application have to change the dependency code during feature development? If so, why? If not, how long can the application wait for the package to have the new changes?
  • Is committing a relative path a hindrance? Is it real or reactive? Can our group establish a common structure for shared project dependencies?

Seriously, if you don't need packages, don't create them. You can still have loosely-coupled shared code.

A Simple Heuristic

Not all shared dependencies should be packages. That's a critical message to take away from this article. It's easy to think our development lives would be easier by packaging all our shared dependencies, but that isn't true.

If the dependency is usually developed in lockstep with the parent, don't make it a package.

For example, these probably aren't good choices for packages:

  • A CustomerService assembly
  • An Entity Framework repository assembly

Candidates for internal packages have these qualities:

  • Don't change major functionality frequently
  • Are highly backward-compatible
  • Who uses them doesn't have to be known

Here are some examples. Typically these are customized for the business and are application-agnostic.

  • Security libraries. Great choice.
  • Type manipulation libraries
  • Networking libraries
  • Libraries that work across domains

What do I mean by that last one? Let's say you're practicing Domain-Driven Design. You've segregated your domains and created separate repositories per domain. However, you find there are some domain behaviors that apply to all domains and need to be consistent. Those may be a candidate for a package.

Building Blocks

These are the building blocks we're dealing with.

  • Source control
  • Project vs package references
  • Relative paths for project references
  • Automated build dependencies
  • Dependency versioning

A Common Folder Structure

Since we're assuming software development inside an organization, we can take advantage of that by sharing some conventions. For all the solutions below I recommend the same folder layout. It's not the only way, but it is consistent and allows a monorepo-style arrangement to be broken up more easily later.

  1. Use this general structure

    |_repos
      |_Repo
        |_docs
        |_src
          |_Project1
            |_Project1.csproj
          |_Project2
            |_Project2.csproj
          |_Solution1.sln
        |_build.ps1
        |_pipeline.yml
    
  2. A repo with shared project references doesn't need (but could have) a solution file. Its name is prepended with an underscore so that shared projects are at the top of the development folder.

    |_repos
      |__SharedRepo
        |_docs
        |_src
          |_Project1
            |_Project1.csproj
          |_Project2
    

The key to this or other layouts is the predictability of relative paths.
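
As a sketch, here's one way to scaffold that layout for a new repo with the dotnet CLI (MyRepo, Project1, and Project2 are placeholders):

# Scaffold the conventional layout for a new repo
$root = "C:\repos\MyRepo"
New-Item "$root\docs" -ItemType Directory -Force | Out-Null
New-Item "$root\src" -ItemType Directory -Force | Out-Null
Set-Location "$root\src"
dotnet new classlib -o Project1
dotnet new console -o Project2
dotnet new sln -n Solution1
dotnet sln Solution1.sln add .\Project1\Project1.csproj .\Project2\Project2.csproj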

Repositories

As long as we stick to the layout, our project references will resolve correctly.

Here's an example of four Git repositories. Notice how, as long as the convention is followed, the repos don't conflict with each other.

repos
|__SharedRepo1
  |_.git
  |_docs
  |_src
|__SharedRepo2
  |_.git
  |_docs
  |_src
|_AppRepo1
  |_.git
  |_docs
  |_src
    |_Project1
      -> ProjectReference ../../SharedRepo1/src/Project1/Project1.csproj
      -> PackageReference PackageProject1.2.1.5
  |_build.ps1
  |_pipeline.ps1
|_AppRepo2
  |_.git
  |_docs
  |_src
    |_Project1
      -> ProjectReference ../../SharedRepo1/src/Project1/Project1.csproj
      -> ProjectReference ../../SharedRepo2/src/Project3/Project3.csproj
      -> ProjectReference ../../PackageRepo1/src/PackageProject1/PackageProject1.csproj
  |_build.ps1
  |_pipeline.ps1
|_PackageRepo1
  |_.git
  |_docs
  |_src
    |_PackageProject1
    |_PackageProject2
    |_Packages.sln
  |_build.ps1
  |_pipeline.ps1

AppRepo2's Project1 shows what I'd prefer not to do: reference a NuGet package's project directly. This is discussed later.
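
The relative paths only resolve if the repos are cloned side by side under the same root, for example (the organization and repo URLs are placeholders):

# Clone related repos side by side so ProjectReference relative paths resolve
Set-Location C:\repos
git clone https://dev.azure.com/yourorg/yourproject/_git/SharedRepo1 _SharedRepo1
git clone https://dev.azure.com/yourorg/yourproject/_git/SharedRepo2 _SharedRepo2
git clone https://dev.azure.com/yourorg/yourproject/_git/AppRepo1 AppRepo1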

Meta-Solution Files

For convenience, developers may want to have their own solution files that combine other solutions. However, most of these should not be stored in source control.

If you put them in the root of the development folder (e.g. repos), there's no source control issue. But it may make more sense for them to be in a repository's folder, yet still not version controlled. This is accomplished by adding a line to .gitignore:

# Ignore root-level solution files
*.sln

|_repos
  |_Repo
    |_.git
    |_src
      |_Solution1
      |_Solution2
    |_.gitignore
    |_One-Solution.sln
    |_All-Solutions.sln
    |_CommonApps.sln

If there are a few solution files that are worth keeping in source control, a naming convention could be used instead.

# Ignore root-level solution files with this naming
*[Ss]olution*.sln

In this case, One-Solution.sln and All-Solutions.sln would be ignored but CommonApps.sln would be tracked by Git.
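
A quick way to build such a personal meta-solution, which the .gitignore rule above keeps untracked (paths are examples):

# Create a personal meta-solution at the repo root and add every project under src
Set-Location C:\repos\Repo
dotnet new sln -n All-Solutions
dotnet sln All-Solutions.sln add (Get-ChildItem .\src -Recurse -Filter *.csproj).FullName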

Simultaneous Development With Packages, If You Must

If you have to develop a package along with its parent, you need to handle a few things.

  • (Temporarily or permanently) set a project reference to the package, OR
  • Publish the package locally for a code-publish-install development loop (don't do this, it's dumb!)
  • Local build, and continuous integration, must either
    • Remove the project reference, then deploy the package before building the parent, OR
    • Maintain the project reference so the parent builds with the new dependency, and simultaneously build/deploy the package

.NET Core project files can have both a PackageReference and ProjectReference entry for the same assembly, but it's not good practice and will lead to headaches.

You should not be bouncing back and forth between project references and package references. Solve the problem that's leading to this anti-pattern.

Let's assume you have an application that depends on a packaged project, but you're always updating that dependency in lockstep. The real problem is build and deployment.

  • The parent project has a project reference to the dependency
  • The build script builds the parent app using the project reference, but generates a package for the dependency
  • The deployment server deploys the new package independently of the parent

Here's some PowerShell pseudocode. Your actual implementation will be different!

# MyApp build.ps1
# Paths assume this script is run from MyRepo\src (see the folder layout above)

# Run the dependency solution's build script, assuming it's independent
# This should prompt for a new version number, see explanation below
cd ..\..\_SharedRepo\src\CompanySecurityHelpers
.\build.ps1

# Build and publish the solution locally, which builds the dependency dll
# with the new version number
cd ..\..\..\MyRepo\src
dotnet build MyApp.sln -c Release
dotnet publish MyApp.sln -c Release --no-build --output package

# If the dependency's script doesn't create the package, package it here
dotnet pack ..\..\_SharedRepo\src\CompanySecurityHelpers -c Release --no-build --output package
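
On the CI/CD side, the package produced above is then pushed to the internal feed independently of the parent's deployment. A minimal sketch, assuming an Azure Artifacts feed (the URL and key are placeholders):

# Push the dependency package to the internal feed; the parent app deploys separately
dotnet nuget push .\package\CompanySecurityHelpers.*.nupkg --source "https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/nuget/v3/index.json" --api-key az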

The Versioning Puzzle - In General

  • Versioning is hard to fully automate. I think it's a mistake to try.
  • Fully automating Patch versioning is probably OK, though.
  • Reduce the friction to allowing developers to commit a new version number.

The idea is simple. Most of the time, you only need to increment the version number when you're ready to generate a pull request. At that time, you should be rebasing your changes on top of the latest mainline release, so you should have access to the latest package version.

  1. The local build script updates the package version
  2. The automated build/deployment pipeline fails if the package version is lower than what's in the store

Here's pseudo code for asking the developer if the version should be increased. It's the idea, not production code. This all works best when developers are committing and synchronizing code frequently.

    $currentVersion = GetCurrentVersion    # placeholder: reads the version from the .csproj
    Write-Host "Current version is $currentVersion"
    $reply = Read-Host -Prompt 'Do you want to increment the version to publish the package? (y/N)'
    $bumpVersion = $reply -eq 'y'
    if ($bumpVersion) {
        $level = Read-Host -Prompt 'Which level? (major, minor, patch, [pre])'
        # IncrementVersion takes care of updating the .csproj properties
        $newVersion = IncrementVersion $currentVersion $level
        Write-Host "New version will be $newVersion"
    }

A PowerShell script could also read the package store, find the latest package, read its version number, and compare to the package being built. This could run locally to catch a problem before the PR, and would definitely run in the CI/CD server.
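
Here's a minimal sketch of that check, in the same pseudocode spirit. GetLatestPublishedVersion is a placeholder for however you query the feed (nuget.exe list, the feed's REST API, etc.), and prerelease suffixes would need extra handling:

    # Fail if the version being built isn't higher than the latest published version
    $building = [version](GetCurrentVersion)
    $latest   = [version](GetLatestPublishedVersion 'CompanySecurityHelpers')
    if ($building -le $latest) {
        throw "Version $building must be higher than the latest published version $latest."
    }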

The Versioning Puzzle - NuGet Packages In Particular

NuGet packages have a history of dependency resolution challenges. It looks like .NET Core has fixed most issues. To be sure, dependency resolution is hard because a top-level .NET project can only have one version of a dll when it builds. So, what should happen in the following environment?

  • MyApp installs and uses Json.Net 9.1.0. It depends on features in that version.
  • MyApp installs CoolAutoConfigurator, which itself depends on Json.Net 8.3.7.

MyApp
|_Json.Net 9.1.0
|_CoolAutoConfigurator
  |_Json.Net 8.3.7

Which version of Json.Net should MyApp include in the bin folder? The latest? What if there were changes that break CoolAutoConfigurator?

Generally, in Core, the latest version will be used. It's up to the developer to deal with a breaking change by coding around it or contacting the package developer to request they upgrade.

In .NET Framework, including a package with a specific assembly version would often cause conflicts that were resolved using special files that would, in effect, say "If the assembly version is this, it's OK to use this other assembly version."

Package maintainers found the better way to manage Framework packages was to use specific package versions, but change assembly versions only on major releases. So, Json.Net package 9.2.3 and 9.1.0 would both generate a dll with assembly version 9.0.0. The job of the package developer was to ensure no breaking changes in major versions.

This matters. .NET Framework and Core resolve which NuGet package to install based on package version, but assemblies (exes and dlls) resolve which other assemblies to use by assembly version, and assemblies with different versions are not equal. If myapp.exe and mylibrary1.dll both use mylibrary2.dll, but expect different assembly versions, the program will fail.

Core improves on Framework by being more lenient in accepting assembly versions.

A further complication is which version numbers appear in the NuGet package store versus which are embedded in the compiled assembly (DLL). Here's a handy map for all you people who want to right click myfile.dll > Properties > Details.

I'm focusing on .NET Core here.

.NET Core .csproj property   .csproj value   NuGet store     File Property (Details tab)
Package version              1.0.2-alpha1    1.0.2-alpha1    Product version
Assembly version             1.0.0.0         -               -
Assembly file version        1.0.0.123       -               File version

Weird, isn't it? The assembly version isn't visible in file properties, but it's critical for our applications and for the Global Assembly Cache (GAC).
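
For reference, these map to MSBuild properties that can be set in the .csproj or passed at build/pack time. A small sketch, where MyLib.csproj is a placeholder:

# Version         -> package version / Product version (e.g. 1.0.2-alpha1)
# AssemblyVersion -> assembly version, often held stable across a major release
# FileVersion     -> file version, often carries a build number
dotnet pack .\MyLib\MyLib.csproj -c Release -p:Version=9.2.3 -p:AssemblyVersion=9.0.0.0 -p:FileVersion=9.2.3.123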

Wrap Up

In general, organizations need to avoid making the assumption that all their shared project references should become packages. Instead, they should extract into packages libraries that can be developed independently, and maintain project references to the other shared code.

Importantly, they should then reduce the shared project references by ruthlessly reevaluating and refactoring the code.

Having a convention for code folder layouts makes it much easier to separate code into independent repositories.

Versioning is more difficult when it's highly automated. Try keeping it in the developer's hands, and create safeguards against accidentally deploying the wrong thing.[1]

Finally, packages do not equal loose coupling, though they can aid in a decoupling effort. Loose coupling has more to do with architecture, modeling, and development practices. An Onion Architecture, Domain-Driven Design, and Test-Driven Development will go a long way to lowering your code's coupling and complexity.


  1. This is one area where I may be more wrong than I want to admit. A better way of thinking might be, "how easy can we make versioning without causing conflict?"