A pleasant walk through computing

Comment for me? Send an email. I might even update the post!

Going Full Screen: When and Why

The Screen Dilemma
One screen
Two screen
Full screen
Lots-o-Windows screen

OK, I'm no Dr. Seuss.

Whether you're working on an 11" ultraportable, or have three 25" 4K monitors, minute-by-minute you are faced with a decision:

Do I run the current application windowed or full-screen?

This may not seem to require much thought. Whatever works, right? But is our decision working for our productivity, or against it?

Lots of people blow up every application to full-screen and switch back and forth in a mad fit of mouse-clicking. Others keep all their apps on screen and fling them around like Tom Cruise in Minority Report. I have some opinions on when one or the other is more effective.

Caveat Alert!
I don't have science to back up all of these opinions. If you do (or evidence against), send me an email!



The rest of this post really rests on two principles:

1. Reduce distractions
2. Reduce task-switching

These are pretty good ideas for lots of things, not just computer use.

Windowed: Reduced Context-Switching

Better for:

  • Research
  • Data Entry from one source to another

The biggest leap in productivity and joy for most people is when they move from one screen to two. Even fifteen years ago, Albert Balcells, the vice-president of CoActive Marketing Group's development department, told me, "The studies show that you'll make back the few hundred dollars you spend on a second screen within three weeks."

What I commonly hear is, "Wow, now I can have one document open on one screen while I'm working on another!" This is a perfect use case. For example, if I'm researching for a blog post, I'll take notes on one screen while zooming around the web browser in the other. When I'm programming, I have to look up lots of stuff. Keeping the editor in front of me, and a browser full of tabs on a separate screen, triples my productivity.

"What about three screens?" you ask.

I use three screens, and find it effective. I'll bet many people do the same thing as I do. One screen is dedicated to just a few apps that are always open and visible. For me it's email, journal and time-boxing timer. I have that monitor oriented in portrait mode.

So, yeah, if you can afford it, do it. But don't increase your distractions! Apply the same rules to three monitors as you do to one!

Tip: Minimize, Don't Close
Avoid closing windows when you're temporarily done with them. Learn to anticipate when you'll be going back and forth between applications, and if you don't need an application on the screen, minimize it. I've seen users waste a lot of time opening Word, opening a document, doing one thing, closing the document, closing Word, and then a minute later opening Word again. It's exhausting.

Tip: Use Effective Layering
Even with multiple large screens, layering windows on top of one another is inevitable. Do this effectively if you're switching between them. You should always be able to see enough of a window to identify it and open it without completely obscuring some other window.

Medical Reality
Of course, if you have bad eyes, you'll be going full screen more often. But you can still apply the principles.

Full-Screen: Reduced Distractions

Better for:

  • Writing
  • Heads-down programming
  • Long-form reading
  • Image/Video editing

We think we can do creative or cognitively intensive work while listening to music, reading and commenting on social media, checking email, and playing with our phones. We can't. Humans don't multi-task (really, they don't), and are poor at task-switching.

Lecture Alert
You're not an exception. You think you are. I know you do. But you're not. The science on multi-tasking and task-switching is consistent. You shouldn't use the phone in the car either, not even hands-free, because study after study over the last twenty-five years shows that hands-free makes no difference; it's as dangerous as drunk driving. Stop it. Your life's more important, and so is mine.

When you're writing your grant proposal, or need to get into the programming zone, or focusing on that merchandising layout, go full-screen, minimize any other window that you can, and turn off your distractions. There is science to back this up, so you don't have to trust me, you can look it up. But save yourself some time and just try it.

Going full-screen not only helps focus on the job at hand, but is also valuable because of the number of tools, toolbars, menus, etc. on high-end applications. For example, if you do photo editing, programming, vector drawing, there could be a hundred or more things to click in front of you. Multiple windows aren't the only distraction. You might want to reduce what the apps show, too (I touch on this later).

Mixed Mode: Just Be Careful

Better for:

  • Multi-monitor, especially three

Most of the time, I work in mixed mode. I have my important windows open and arranged so I can reduce task-switching, and I go full-screen when the need arises.

Tip: Don't Have Actively Moving Windows Open
In movies, it's really cool to see all these screens with stuff constantly moving on them. But in heads-down mode, it's just another eye-catcher. Just as you can't help watching the TV when you're on a date at a bar, anything else flickering or moving nearby only adds to your distraction. Either minimize it or, in the case of a web browser, switch to a tab that's static.

Don't tell yourself you can watch The Lost Room while coding a banking data integration. You won't fully enjoy the former, and will screw up the latter.

Apply the Same Principle to Applications

There's a class of applications called "zenware." The idea is that they focus on providing a distraction-free experience. Even if you don't stick with it, try out a few. This may really influence your experience of your everyday apps, and you might find yourself paring down what's on the screen.

Applications' default layouts are often not the most efficient. They're there to show you the application's capabilities, not improve your work flow. Only keep the tools/frames open that you use frequently. Reduce clutter, but also make it easy to open what you need.

Distractions: Thoughts on Wallpaper

This is a sensitive subject, so definitely take it or leave it. Computer desktop wallpaper is one of the few things users can easily personalize. Except in rare situations, I think businesses should let users set whatever wallpaper they want as long as it's work-appropriate. What that means can be vague, but if there's a question there's an opportunity for better communication.

Regardless, I prefer wallpapers that are interesting but also truly fade into the background. My favorites are black-and-white photos that aren't very busy. Try thinking of your computer wallpaper as a canvas, not a photo album.

Do you really want to find out what difference it makes? Switch your wallpaper to plain black, dark grey, or whatever color pleases you (but I wouldn't go super bright). Try that for a week. Then switch back to your other wallpaper. Which is less stressful?

Here's a wallpaper I like. It's a photo I took of Winton Woods in Cincinnati, Ohio.


Distractions: Thoughts on Desktop Icons

I hate them. I especially hate when users have hundreds on their desktop plus a picture of puppies playing with ribbons.

Unfortunately, the reason so many users use the desktop for storing files and folders is because they've never been taught how to use the Windows Explorer as originally intended. And I don't blame them, because for years Microsoft and Apple have tried to make file management "better" by hiding something everyone can understand if given half a chance.

The computer file system is based on physical file cabinets, which have been around for well over a century.

The big problem with icons on the desktop is that, in order to work with them, you have to minimize open applications. That's a huge time sink.

Here's my advice.

  • Don't open applications using desktop shortcuts. Pin them to the taskbar instead, because it's always available. Then delete those shortcuts.
  • On Windows 7 and 10 you can right-click a taskbar app icon to see the most recent files.
  • Organize your files in the Documents folder, then keep the Windows Explorer window open or minimized.
  • If you must keep icons on the desktop, limit yourself to only the most important you're working on right now, no more than a dozen, and put them all on the left or right side. Most people seem to prefer the left. This lets you resize windows so that the icons are still in view.

Distractions: Final Note

Keep other distractions to an absolute minimum. Do you really need to be notified whenever an email arrives? Does your job require following messaging feeds while trying to focus on the task at hand?

In most cases, the answer is "no." Don't let applications interrupt you when they want to. They are not the boss of you. You decide when to check email.

Summing up

As I discovered when writing this post, the real issue isn't whether to run apps full screen or not. It's why, and that boils down to distractions.

Maybe this will help in some way. I hope it'll inspire you to examine how you're working and try some experiments.

Work well!

NuGet PackageReference in Visual Studio

The information is accurate as of this writing, as far as I know.


What's the Change?

Today, a project's NuGet package information is stored in a project-level packages.config file. The assemblies are stored in a separate packages folder, usually at the solution level. The project's .csproj file contains Reference elements that point to the assemblies.

Packages.config format folder layout and text samples

In the .csproj:

    <Reference Include="ParseText.dll, Version=, Culture=neutral, processorArchitecture=MSIL" />
    <Reference Include="PersistText.dll, Version=, Culture=neutral, processorArchitecture=MSIL" />

In packages.config:

    <package id="ParseText" version="1.0.4" targetFramework="net45" />

The PackageReference format moves the package information out of packages.config into the .csproj file, and removes the assembly references. The packages folder is removed, as well, in favor of a user-profile folder found at %userprofile%\.nuget\packages.

PackageReference format folder layout and text samples

    <PackageReference Include="ParseText" Version="1.0.4" />

When the project builds, Visual Studio finds the packages in the expected location and copies the assembly dependencies to the bin folder.

What Are the Advantages?

  • Package binaries live outside the solution folder, so restorable binaries are much harder to accidentally include in source control.
  • Only the package information is shown in References, which is usually what the developer wants to see.
  • Package restore is faster because files aren't copied to a solution folder. Continuous Integration benefits, as well, by having just one well-known package location.
  • Importantly, since the paths to assemblies aren't stored in the .csproj file, version control thrash due to differences between developer environments is eliminated. No more update-package -restore because DLLs can't be found.
  • The PackageReference element allows more flexibility and direct use by MSBuild.
  • For NuGet package authors, the nuspec information is stored directly in the project file, not in a separate .nuspec file. Also, Build and Pack tasks are included in MSBuild.

Which Visual Studio Editions and Project Types Does It Work With?

PackageReference is available in Visual Studio 2017. As of this writing, per NuGet.org:

Although we’re working to bring the PackageReference goodness to all project types and to make all packages compatible with PackageReference, migration is not presently supported for C++, JavaScript, and ASP.NET (.NET Framework) projects.

Also, some package capabilities are not fully compatible with PackageReference.

Some examples of scenarios that will not be supported include content folders (we have introduced ContentFiles), XDT transforms, PowerShell scripts i.e. install.ps1 and uninstall.ps1 (only init.ps1 is supported).

Author Note
Obviously ASP.NET support is important, and projects may depend on packages that aren't compatible. However, its lack doesn't prevent migrating compatible projects.

Is There a Converter?

Yes, there is! It's currently available in the Visual Studio Preview edition. It has some known issues, but seems to work well. See the References for a link to instructions.

Manually Converting to PackageReference

This works with ASP.NET, too, but is more likely to have problems because of content files (stylesheets, scripts, etc.). I need to look into this further.

  1. Backup solution
  2. Open solution
  3. Open a project's packages.config
  4. Open the project's .csproj file
  5. In another text editor, create an ItemGroup containing a PackageReference element for each entry in packages.config, using the template <PackageReference Include="[PackageId]" Version="[PackageVersion]" />. For example, this entry:
    <package id="Newtonsoft.Json" version="10.0.3" targetFramework="net461" />
    becomes:
    <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
  6. Right-click References > Manage NuGet Packages
  7. Choose the Installed tab.
  8. Uninstall the packages for the project only.
    There could be more cleanup in .csproj, such as deleting extraneous .targets references.
  9. Copy the ItemGroup into the project file and save.
  10. Delete packages.config
  11. Open Package Manager Console, select the project, and run (for example)
    Update-Package -ProjectName Sms.Web.Verifications -Reinstall
  12. Build
  13. Run

I recommend doing a file/folder diff against the previous version (easy if you're using version control). This will reveal problems that a build/run may not catch.

Can I Convert With a Script?

I haven't tried a script yet, but I don't see why not.
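If I were to try, I'd sketch it roughly like this in Python. Note that `packages_to_references` is a hypothetical helper of my own, not part of any NuGet tooling; it only generates the ItemGroup text, so uninstalling the old packages and deleting packages.config would still follow the manual steps above.

```python
# Sketch only: turn packages.config content into a PackageReference ItemGroup.
# This does NOT edit the .csproj or remove old Reference elements;
# it just produces the XML you'd paste in at step 9 above.
import xml.etree.ElementTree as ET

def packages_to_references(packages_config_xml: str) -> str:
    """Build a PackageReference ItemGroup from packages.config content."""
    root = ET.fromstring(packages_config_xml)
    lines = ["<ItemGroup>"]
    for pkg in root.findall("package"):
        # Each <package id="..." version="..."/> maps to one PackageReference.
        lines.append('  <PackageReference Include="%s" Version="%s" />'
                     % (pkg.get("id"), pkg.get("version")))
    lines.append("</ItemGroup>")
    return "\n".join(lines)

if __name__ == "__main__":
    sample = ('<packages>'
              '<package id="Newtonsoft.Json" version="10.0.3" targetFramework="net461" />'
              '</packages>')
    print(packages_to_references(sample))
```

The targetFramework attribute is deliberately dropped; under PackageReference the project's own TargetFramework governs resolution.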


Image Attributions

  • By NuGet project team (https://github.com/NuGet/Media) [Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0)], via Wikimedia Commons
  • By Microsoft Corporation ([1]) [Public domain], via Wikimedia Commons

Accelerate Book Notes

Work in progress! New chapter notes and quotes will be added as I finish reading each chapter!


  • 2018-10-14 Chapters 1-5 released
  • 2018-11-05 Chapters 6-7 added
  • 2018-11-06 Chapter 8 added


This is a chapter-by-chapter collection of notes and quotes for the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, Nicole Forsgren PhD, Jez Humble, Gene Kim.

Any quotes from the book in this article are copyrighted. My intent is to provide an overview of what struck me in the material, and to encourage any visitor to purchase and read this important work.

Quick Reference: Capabilities to Drive Improvement


Our research has uncovered 24 key capabilities that drive improvements in software delivery performance. This reference will point you to them in the book.

The capabilities are classified into five categories:

  • Continuous delivery
  • Architecture
  • Product and process
  • Lean management and monitoring
  • Cultural


Continuous delivery:

  1. Version control: Chapter 4
  2. Deployment automation: Chapter 4
  3. Continuous integration: Chapter 4
  4. Trunk-based development: Chapter 4
  5. Test automation: Chapter 4
  6. Test data management: Chapter 4
  7. Shift left on security: Chapter 6
  8. Continuous delivery (CD): Chapter 4

Architecture:

  9. Loosely coupled architecture: Chapter 5
  10. Empowered teams: Chapter 5

Product and process:

  11. Customer feedback: Chapter 8
  12. Value stream: Chapter 8
  13. Working in small batches: Chapter 8
  14. Team experimentation: Chapter 8

Lean management and monitoring:

  15. Change approval processes: Chapter 7
  16. Monitoring: Chapter 7
  17. Proactive notification: Chapter 13
  18. WIP limits: Chapter 7
  19. Visualizing work: Chapter 7

Cultural:

  20. Westrum organizational culture: Chapter 3
  21. Supporting learning: Chapter 10
  22. Collaboration among teams: Chapters 3 and 5
  23. Job satisfaction: Chapter 10
  24. Transformational leadership: Chapter 11



Beginning in late 2013, we embarked on a four-year research journey to investigate what capabilities and practices are important to accelerate the development and delivery of software and, in turn, value to companies.

Chapter 1 - Accelerate


Companies, even big ones, are moving away from big projects, instead using small teams working in short development cycles.

Software is at the heart of (most of) these transformations. [I don't personally believe that software is the end-all of business improvement, but it's true that many businesses don't realize how much software drives them.]

Maturity models don't work. Capability models do.

None of these factors predict performance:

  • age and technology used for the application (for example, mainframe “systems of record” vs. greenfield “systems of engagement”)
  • whether operations teams or development teams performed deployments
  • whether a change approval board (CAB) is implemented


DevOps emerged from a small number of organizations facing a wicked problem: how to build secure, resilient, rapidly evolving distributed systems at scale.

A recent Forrester (Stroud et al. 2017) report found that 31% of the industry is not using practices and principles that are widely considered to be necessary for accelerating technology transformations, such as continuous integration and continuous delivery, Lean practices, and a collaborative culture (i.e., capabilities championed by the DevOps movement).

Another Forrester report states that DevOps is accelerating technology, but that organizations often overestimate their progress (Klavens et al. 2017). Furthermore, the report points out that executives are especially prone to overestimating their progress when compared to those who are actually doing the work.

To summarize, in 2017 we found that, when compared to low performers, the high performers have:

  • 46 times more frequent code deployments
  • 440 times faster lead time from commit to deploy
  • 170 times faster mean time to recover from downtime
  • 5 times lower change failure rate (1/5 as likely for a change to fail)

Chapter 2 - Measuring Performance


The academic rigor of this book is exceptionally high.

Three ways to measure performance that don't work: lines of code, velocity, and utilization.

Two characteristics of successful measures: focus on global outcomes, and focus on outcomes not output.

Lead Time is "the time it takes to go from a customer making a request to the request being satisfied." The authors focused on the delivery part of lead time--"the time it takes for work to be implemented, tested, and delivered."

The stats on "the impact of delivery performance on organization performance" show that software development and IT are not cost centers; they can provide a competitive advantage.

It's important to distinguish between strategic and non-strategic software. Strategic software should be kept in-house. See Simon Wardley and the Wardley mapping method.


There are many frameworks and methodologies that aim to improve the way we build software products and services. We wanted to discover what works and what doesn’t in a scientific way,...

Most of these measurements focus on productivity. In general, they suffer from two drawbacks. First, they focus on outputs rather than outcomes. Second, they focus on individual or local measures rather than team or global ones.

In our search for measures of delivery performance that meet these criteria, we settled on four: delivery lead time, deployment frequency, time to restore service, and change fail rate.

Astonishingly, these results demonstrate that there is no tradeoff between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures. This is precisely what the Agile and Lean movements predict,...

It’s worth noting that the ability to take an experimental approach to product development is highly correlated with the technical practices that contribute to continuous delivery.

The fact that software delivery performance matters provides a strong argument against outsourcing the development of software that is strategic to your business, and instead bringing this capability into the core of your organization.

The measurement tools can be used by any organization,...

However, it is essential to use these tools carefully. In organizations with a learning culture, they are incredibly powerful. But “in pathological and bureaucratic organizational cultures, measurement is used as a form of control, and people hide information that challenges existing rules, strategies, and power structures. As Deming said, 'whenever there is fear, you get the wrong numbers'”

Chapter 3 - Measuring and Changing Culture


Not only is culture measurable for the purposes of the book, but the authors learned that DevOps can "influence and improve culture."

Westrum's three characteristics of good information:

  1. Provides the needed answers
  2. Timely
  3. Presented so can be used effectively

A good culture:

  • Requires trust and cooperation
  • Has higher-quality decision-making
  • Has teams that do a better job with their people

John Shook, in a video on the Lean Transformation Model, makes it clear that the approach is value-driven, centered on the "true north" of any situation. John Shook Explains the Lean Transformation Model - YouTube


Ron Westrum's three kinds of organizational culture

Pathological (power-oriented) organizations are characterized by large amounts of fear and threat. People often hoard information or withhold it for political reasons, or distort it to make themselves look better.
Bureaucratic (rule-oriented) organizations protect departments. Those in the department want to maintain their “turf,” insist on their own rules, and generally do things by the book — their book.
Generative (performance-oriented) organizations focus on the mission. How do we accomplish our goal? Everything is subordinated to good performance, to doing what we are supposed to do.

Table 3.1 Westrum's Typology of Organizational Culture.

Pathological (Power-Oriented)   | Bureaucratic (Rule-Oriented) | Generative (Performance-Oriented)
Low cooperation                 | Modest cooperation           | High cooperation
Messengers “shot”               | Messengers neglected         | Messengers trained
Responsibilities shirked        | Narrow responsibilities      | Risks are shared
Bridging discouraged            | Bridging tolerated           | Bridging encouraged
Failure leads to scapegoating   | Failure leads to justice     | Failure leads to inquiry
Novelty crushed                 | Novelty leads to problems    | Novelty implemented

Using the Likert-scale questionnaire

To calculate the “score” for each survey response, take the numerical value (1-7) corresponding to the answer to each question and calculate the mean across all questions. Then you can perform statistical analysis on the responses as a whole.
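To make the scoring concrete, here's a tiny sketch of my own (the function name and sample answers are illustrative, not from the book):

```python
# Score one Likert-scale survey response, as described above:
# the respondent's score is the mean of their 1-7 answers.
def likert_score(answers):
    """Mean of one respondent's 1-7 Likert answers."""
    if not all(1 <= a <= 7 for a in answers):
        raise ValueError("Likert answers must be between 1 and 7")
    return sum(answers) / len(answers)

# A hypothetical respondent answering six culture questions:
print(round(likert_score([6, 5, 7, 6, 5, 6]), 2))  # prints 5.83
```

With one score per respondent, you can then run whatever statistical analysis you like across the whole set of responses.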

Westrum’s theory posits that organizations with better information flow function more effectively.

Thus, accident investigations that stop at “human error” are not just bad but dangerous. Human error should, instead, be the start of the investigation.

“What my... experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave — what they do” --John Shook, leader in lean manufacturing

Our research shows that Lean management, along with a set of other technical practices known collectively as continuous delivery (Humble and Farley 2010), do in fact impact culture,...

Chapter 4 - Technical Practices


CD practices will help improve culture. However, implementing the practices "often requires rethinking everything".

Keeping system and application configuration in version control is more important to delivery performance than keeping code in version control. (But both are important).


Continuous delivery is a set of capabilities that enable us to get changes of all kinds — features, configuration changes, bug fixes, experiments — into production or into the hands of users safely, quickly, and sustainably. There are five key principles at the heart of continuous delivery:

  1. Build Quality In
    Invest in building a culture supported by tools and people where we can detect any issues quickly, so that they can be fixed straight away when they are cheap to detect and resolve.
  2. Work in Small Batches
    By splitting work up into much smaller chunks that deliver measurable business outcomes quickly for a small part of our target market, we get essential feedback on the work we are doing so that we can course correct.
  3. Computers perform repetitive tasks; people solve problems
    One important strategy to reduce the cost of pushing out changes is to take repetitive work that takes a long time, such as regression testing and software deployments, and invest in simplifying and automating this work.
  4. Relentlessly pursue continuous improvement
    The most important characteristic of high-performing teams is that they are never satisfied: they always strive to get better.
  5. Everyone is responsible
    in bureaucratic organizations teams tend to focus on departmental goals rather than organizational goals. Thus, development focuses on throughput, testing on quality, and operations on stability. However, in reality these are all system-level outcomes, and they can only be achieved by close collaboration between everyone involved in the software delivery process. A key objective for management is making the state of these system-level outcomes transparent, working with the rest of the organization to set measurable, achievable, time-bound goals for these outcomes, and then helping their teams work toward them.

A key goal of continuous delivery is changing the economics of the software delivery process so the cost of pushing out individual changes is very low.

In order to implement continuous delivery, we must create the following foundations:

  • Comprehensive configuration management
    It should be possible to provision our environments and build, test, and deploy our software in a fully automated fashion purely from information stored in version control.
  • Continuous integration
    High-performing teams keep branches short-lived (less than one day’s work) and integrate them into trunk/master frequently. Each change triggers a build process that includes running unit tests. If any part of this process fails, developers fix it immediately.
  • Continuous testing
    Automated unit and acceptance tests should be run against every commit to version control to give developers fast feedback on their changes. Developers should be able to run all automated tests on their workstations in order to triage and fix defects.

Implementing continuous delivery means creating multiple feedback loops to ensure that high-quality software gets delivered to users more frequently and more reliably.

Chapter 5 - Architecture


See the quote below: it's important that teams be loosely coupled, not just the architecture. But how does this apply to a small software/IT shop of, say, a half dozen employees?

I see autonomy of teams coming up quite a bit, not having to ask for permission to change from someone outside the team, and also that changes don't strongly affect other teams. This is a result of loose-coupling.

So, it's interesting that good information flow is important, but that teams shouldn't need to overly communicate in order to do their work.

The authors make a great point about service-oriented architecture and microservices, that these both can enable the desired outcome of testability, but, they don't guarantee it. The team has to be vigilant about ensuring the ability to independently test services.

A good, loosely-coupled architecture allows not just the software but the teams to scale. Contrary to assumptions, adding employees when there's a proper software/team architecture leads to increasing deployment frequency.

I've been in the camp that wants to reduce or at least make tooling consistent. I think there's still some value in that from a maintainability perspective. However, using the better tool for the job apparently has more value. I don't think the authors are advocating a gung-ho or cavalier approach, though.


We found that high performance is possible with all kinds of systems, provided that systems—and the teams that build and maintain them—are loosely coupled.

We discovered that low performers were more likely to say that the software they were building—or the set of services they had to interact with—was custom software developed by another company (e.g., an outsourcing partner).... In the rest of the cases, there was no significant correlation between system type and delivery performance. We found this surprising: we had expected teams working on packaged software, systems of record, or embedded systems to perform worse, and teams working on systems of engagement and greenfield systems to perform better. The data shows that this is not the case.

This reinforces the importance of focusing on the architectural characteristics, discussed below, rather than the implementation details of your architecture.

Those who agreed with the following statements were more likely to be in the high-performing group:

  • We can do most of our testing without requiring an integrated environment.
  • We can and do deploy or release our application independently of other applications / services it depends on.

The goal is for your architecture to support the ability of teams to get their work done — from design through to deployment — without requiring high-bandwidth communication between teams.

When teams can decide which tools they use, it contributes to software delivery performance and, in turn, to organizational performance.

That said, there is a place for standardization, particularly around the architecture and configuration of infrastructure.

Another finding in our research is that teams that build security into their work also do better at continuous delivery. A key element of this is ensuring that information security teams make preapproved, easy-to-consume libraries, packages, toolchains, and processes available for developers and IT operations to use in their work.

Architects should focus on engineers and outcomes, not tools or technologies.... What tools or technologies you use is irrelevant if the people who must use them hate using them, or if they don’t achieve the outcomes and enable the behaviors we care about. What is important is enabling teams to make changes to their products or services without depending on other teams or systems.

Chapter 6 - Integrating InfoSec Into the Delivery Lifecycle


InfoSec is everyone's responsibility.

Don't leave security reviews until the end of development. Security is equal to Development and Operations.


many developers are ignorant of common security risks, such as the OWASP Top 10

We found that when teams “shift left” on information security — that is, when they build it into the software delivery process instead of making it a separate phase that happens downstream of the development process — this positively impacts their ability to practice continuous delivery.

  • First, security reviews are conducted for all major features, and this review process is performed in such a way that it doesn’t slow down the development process.
  • the second aspect of this capability: information security should be integrated into the entire software delivery lifecycle from development through operations.
  • Finally, we want to make it easy for developers to do the right thing when it comes to infosec.

We found that high performers were spending 50% less time remediating security issues than low performers.

Rugged DevOps is the combination of DevOps with the Rugged Manifesto.

Chapter 7 - Management Practices for Software


Lean management, as applied to software, currently yields better results.

Four components of Lean management applied to software:

  1. Limit Work in Progress (WIP)
  2. Visual Management
  3. Feedback from Production
  4. Lightweight Change Approvals


  1. Limiting work in progress (WIP), and using these limits to drive process improvement and increase throughput
  2. Creating and maintaining visual displays showing key quality and productivity metrics and the current status of work (including defects), making these visual displays available to both engineers and leaders, and aligning these metrics with operational goals
  3. Using data from application performance and infrastructure monitoring tools to make business decisions on a daily basis

WIP limits on their own do not strongly predict delivery performance. It’s only when they’re combined with the use of visual displays and have a feedback loop from production monitoring tools back to delivery teams or the business that we see a strong effect.

We found that approval only for high-risk changes was not correlated with software delivery performance. Teams that reported no approval process or used peer review achieved higher software delivery performance. Finally, teams that required approval by an external body achieved lower performance.

We found that external approvals were negatively correlated with lead time, deployment frequency, and restore time, and had no correlation with change fail rate. In short, approval by an external body (such as a manager or CAB) simply doesn’t work to increase the stability

Chapter 8 - Product Development


Lean Product Development

  • Work in Small Batches
  • Make Flow of Work Visible
  • Gather and Implement Customer Feedback
  • Team Experimentation


We wanted to test whether these [Lean] practices have a direct impact on organizational performance, measured in terms of productivity, market share, and profitability.

  1. The extent to which teams slice up products and features into small batches that can be completed in less than a week and released frequently, including the use of MVPs (minimum viable products).
  2. Whether teams have a good understanding of the flow of work from the business all the way through to customers, and whether they have visibility into this flow, including the status of products and features.
  3. Whether organizations actively and regularly seek customer feedback and incorporate this feedback into the design of their products.
  4. Whether development teams have the authority to create and change specifications as part of the development process without requiring approval.

It’s worth noting that an experimental approach to product development is highly correlated with the technical practices that contribute to continuous delivery.