Tuesday, March 28, 2017

TFS Continuous Integration Walk Through Part 5c - Multiple Solutions: Dependencies

This is part of a series of walk throughs exploring CI in TFS, starting from the ground up. The entire series and source code are maintained at this BitBucket repository.

Previous Part: TFS Continuous Integration Walk Through Part 5b - Multiple Solutions: Simple Project References

What I'm Talking About

Businesses often have source code that's years or decades old, and that code accumulates problems due to how it was structured into folders. Especially difficult is how dependencies were managed. It's common to see:

  • Projects with project references to other projects in other solutions, sometimes nesting/cascading in a tangled mess.
  • Third party libraries that require installation, such as UI controls.
  • Multiple ways of managing .dll dependencies.

Some common challenges--and reasons why the above happen--are:

  • Multiple projects depend on a shared project, and they often need to step through the shared project's code.
  • Over the years, different developers did things how they liked.
  • Source control wasn't used, or changed.

Where I'll End Up

I'll start with a set of solutions that have some dependency problems. I'll show how they can work with continuous integration. Then, I'll improve the dependency handling.

A Problematic Structure

Let's imagine a TFS repository. Instead of a separate Team Project for each solution, there's a single Team Project named $Main that has all the solutions underneath it.

In this folder structure, I'm showing solution folders with their project folders below. So, ReverseIt is a solution folder with the ReverseIt project folder below it, which is the default Visual Studio layout.

  • NameDb DLL returns a list of names, is packaged using NuGet, and stored in a local source.
  • ReverseIt Console reverses whatever text you type in.
    > Depends on RevEngine (project reference)
  • RevEngine DLL has a ReverseText method. It is a project reference.
    > Depends on jamma.dll
  • Jamma.dll is a third party security dll. The company is out of business.
  • ReverseNames Console displays a list of reversed names coming from NameDb.
    > Depends on RevEngine (project reference)
    > Depends on NameDb (NuGet package)

What are the pros and cons of this approach?

Pros:

  • If you Get Latest on $Main, all the solutions are in their correct relative folders.

Cons:

  • You often have to get source you don't need.
  • The dependency on relative paths is brittle.
  • You can't use TFS's project management tools effectively.
  • It doesn't scale. What if you had fifty solutions using this approach?

Creating CI Builds As Is

My manager says, "We need to get these projects into TFS Build."

I ask, "Can I restructure TFS?"

He says, "Not yet."

I say, "OK."

Since I'm pretty sure there are dependency problems, the first thing I decide to do is spin up a clean machine, install Visual Studio with no changes, Get Latest on $Main, and try to build all the solutions.

What's this!? Multiple failures? Oh, no! What went wrong?

  1. ReverseNames failed because we're depending on an in-house NuGet package source, and didn't configure that, so the NameDb dependency didn't exist.
  2. RevEngine failed because Barry's the only developer who has ever worked on RevEngine, and only his machine has jamma.dll. It was never checked into source control.

Quite a bit more could go wrong, but you get the idea. Let's fix these with an eye toward our eventual build server.


If I had lots of solutions, I could build all of them using two files in the root of the folder that has all the solution folders.


<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <!-- recursively collect every solution file -->
    <AllFiles Include=".\**\*.sln"/>
  </ItemGroup>
  <Target Name="Default">
    <PropertyGroup>
      <BuildCmd>@(AllFiles->'&quot;$(MSBuildToolsPath)\msbuild.exe&quot; &quot;%(Identity)&quot; /v:q /fl /flp:errorsonly;logfile=build-logs\%(filename)-log.txt','%0D%0A')</BuildCmd>
    </PropertyGroup>
    <Exec Command="mkdir build-logs" Condition="!Exists('build-logs')" />
    <Exec Command="$(BuildCmd)" />
  </Target>
</Project>


rem path to your latest VS build version
"C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe" BuildAllSolutions.targets

Running the cmd file creates a folder named "build-logs", recursively builds each solution, and outputs each solution's errors. If a solution's log file is not empty, there was a build problem.
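As a convenience, you (or the build server) can then scan the build-logs folder and flag the failures. Here's a minimal sketch; the function name is mine, and it assumes the logs were written by the targets file above:

```shell
# Print the name of every non-empty log file in the given folder.
# A non-empty errors-only log means that solution failed to build.
report_failed_builds() {
  for f in "$1"/*-log.txt; do
    if [ -s "$f" ]; then   # -s: file exists and has a size greater than zero
      echo "FAILED: $f"
    fi
  done
}
```

Usage: `report_failed_builds build-logs`.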


Dealing With a Local NuGet Package Source

There are four (technically five or six!) places to store NuGet config files containing package source information, and several ways to configure the package source in TFS Build.

NuGet Config File Locations

Let's assume our in-house NuGet source is located at http://ngserver/nuget.

  1. User Profile - Enter it into Visual Studio's settings. This is fine for regular development, but not good for a build server because the build agent service will run as either Local System or a specific user account such as "tfsagent".


You can also manually edit the user profile's nuget.config, which is what the Visual Studio setting dialog is doing. The file is located at %APPDATA%\NuGet\NuGet.config. You add the source under packageSources.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="" value="" />
    <add key="" value="" protocolVersion="3" />
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <bindingRedirects>
    <add key="skip" value="False" />
  </bindingRedirects>
</configuration>
  2. Solution - Create a solution-level nuget.config file.

You can create a file named nuget.config, put it in your solution's root and add it to source control. This will determine which NuGet sources the solution uses.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <!-- uncomment clear if you want to ONLY use these sources -->
    <!-- otherwise, these sources are added to your list -->
    <!-- <clear /> -->
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Note: NuGet 3.3 and earlier looked for a config file in a solution's .nuget folder. Not recommended.

  3. Machine-Wide - Create a machine-wide config file.

The machine-wide story is confusing. A machine-wide NuGet config file can reside in one of two folders. The folder changed with the introduction of NuGet 4.0, which is used by Visual Studio 2017.

  • %ProgramData%\NuGet\Config\ (NuGet 3.x or earlier), or
  • %ProgramFiles(x86)%\NuGet\Config\ (NuGet 4.0+)

It can be named anything that ends with .config, including NuGet.config. However, a custom name seems recommended.

For example, I could name the file SoftwareMeadows.Online.config. It would contain the package source like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Network Administrators will like this option because they can use it with Group Policies. A policy could target computers in Developers and Build Servers groups and always create the desired config file.

Note: NuGet 4.x does not look for config files in ProgramData.

  4. Default Config

If you're using NuGet 2.7 through 3.x, default sources can also be configured in the file %ProgramData%\NuGet\NuGetDefaults.config. These show up in Visual Studio as local, not machine-wide, sources.
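For example, NuGetDefaults.config uses the same packageSources element, so adding our source might look like this (a sketch; verify the element names against your NuGet version's documentation):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>
```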

Note: This file and location will not work with NuGet 4.x.

  5. Other?

You could also put a nuget.config file in any folder and specify it using the nuget.exe -configFile switch, e.g. nuget restore -configfile c:\my.config.


A couple of quirks worth noting:

  • Testing shows that in some cases if the package source URL is the same, only one source key is used. For example, if NuGet.config and NuGetDefaults.config have an identical source URL, the key from NuGet.config is used.
  • It appears a source listed in NuGetDefaults.config cannot be removed using the <clear /> tag. It can only be disabled.

Specifying the Config in TFS Build

Whichever method you use below, ensure the agent service has permission to read the config file. The service name will be something like "VSO Agent ([AgentName])". Microsoft recommends creating a user named "tfsagent". The default is Local Service.


  1. TFS Machine-Wide Path - RECOMMENDED

Personally, for internal development, I'd add the package source to the build server's machine-wide config file and be done with it. So, my path--assuming VS 2015 installed--would be something like: %ProgramData%\NuGet\Config\SoftwareMeadows.Online.config


Remember from above this will change if you install Visual Studio 2017 on the build server (or use NuGet 4.x).

  2. Add nuget.config to the build agent's profile.

Your build server is basically a development machine, with an agent automatically building the software. If you run the service using tfsagent, you could create/edit a nuget.config file found at C:\Users\tfsagent\AppData\Roaming\NuGet.

  3. TFS NuGet Installer Path Field

If you check in a nuget.config file with the solution, enter the path in your build definition's NuGet Installer step. This path is relative to the Path to Solution. I would use this solution if my team didn't all work in the same network, and so needed to use an authenticated NuGet server such as MyGet.


  4. Use the -configFile switch

You could also put a nuget.config file somewhere on the build server (or network?), and use the -configFile switch. Remember the build agent service needs permission to read the file.


Dealing with Barry

Barry's been with us for five years. Barry drinks his coffee black, and lots of it. Barry knows where every file on his machine is, and would prefer you didn't look over his shoulder. Barry has his code, please leave it alone.

Unfortunately, Barry assumes he'll always be here, and hasn't ever tested what would happen if his machine imploded in a fiery death. I go to Barry and say, "Your code doesn't build on a clean checkout." Barry storms over to my computer and starts typing. I observe, take notes, and when I see him copying jamma.dll, ask, "What's that?"

"Oh," he mumbles, "license dll. Forgot about that. Kinda important. RevEngine won't run without it."

I don't say anything, but make a note that, once I have all the software building and deployable, Barry might not be long for our company. In the meantime, there are two ways I can handle this old dependency.

  1. Ensure it's in a folder under the solution or project, reference it there, and add it to source control.


  2. Create the dll as a NuGet package, and add it to my NuGet server.

Download nuget.exe and put it in the same folder as the dll (or put it in its own folder and add that folder to the system PATH variable).

Open a command prompt, change directories to where the dll is, and run nuget spec jamma.dll.

Edit the resulting jamma.dll.nuspec file, changing or removing elements as desired.

<?xml version="1.0"?>
<package >
  <metadata>
    <authors>Jamma Ltd</authors>
    <owners>Jamma Ltd</owners>
    <description>License file</description>
    <copyright>Copyright 2017</copyright>
  </metadata>
</package>

Important: Now move the dll into a subfolder named "lib".

Package the dll using nuget pack.

Add the package file (the generated .nupkg) to your NuGet server however is appropriate; it might just be copy/paste, or using nuget push commands. See the NuGet documentation.

Remove the reference from the project, and re-add from NuGet. Build and test. If everything's OK, delete the old jamma.dll file and folder.

Which would I do? Number 2, so that all my external dependencies are handled the same way (NuGet).

It's Building, so Add to CI

I test again by deleting and re-getting all the source code, open each and restore any NuGet packages, and build all the solutions. Everything builds, so I'm ready to configure TFS Build for continuous integration.

All my solutions are under the same team project. I'll need to be careful when I create my build definitions that I'm only checking out and building the solutions I want. The key to that is not saving the definition until its Repository and Triggers have been configured.

I'll create the first definition in detail, then just list the settings for the remainder.

But first, the NuGet Package Source

If I haven't done it already, I'll add a machine-wide nuget configuration file that has my custom package source.

Create the file %ProgramData%\NuGet\Config\MySource.config with the source definition. In my case, I'm testing with a local NuGet server.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Test Server" value="http://localhost:7275/" />
  </packageSources>
</configuration>


Let's start at the top with the simple one, NameDb. In TFS, navigate to the correct collection and the Main team project. Then open the Build tab, start a new build and choose the Visual Studio template.


Create using the repository source Main Team Project.


Delete the Index, Copy and Publish steps. I can add those back if we want them. I don't need the NuGet installer step, either, but I'll configure it as an example since the other projects need it.


Set the NuGet Installer path to the NameDb solution file.


Set the Visual Studio Build path to the NameDb solution file.


Set the Visual Studio Test "Test Assembly" to the solution folder path. This is the physical folder where the agent will have pulled the files from TFS, the same as the LocalPath we'll choose in Repository. The path definition means "run any dll with 'test' in its name under the build configuration folder, but not in the obj folders." Since our BuildConfiguration will be Release, it'll run Release/NameDb.Tests.dll.
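To make the pattern concrete, here's a rough shell approximation of how that filter behaves. This is an illustration only (the helper name is mine); TFS uses its own minimatch engine and matches case-insensitively:

```shell
# Approximate the test assembly pattern
#   **\$(BuildConfiguration)\*test*.dll;-:**\obj\**
# for BuildConfiguration = Release.
matches_test_pattern() {
  # normalize: lowercase, forward slashes
  p=$(printf '%s' "$1" | tr 'A-Z' 'a-z' | tr '\\' '/')
  case "$p" in
    */obj/*) return 1 ;;                # excluded by ;-:**\obj\**
    */release/*test*.dll) return 0 ;;   # matched by **\Release\*test*.dll
    *) return 1 ;;
  esac
}
```

So bin\Release\NameDb.Tests.dll matches, while the copy under obj does not.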


Do NOT Save!

Change to the Repository tab. This controls what source code gets checked out by the build. Change the mapped paths. We'll keep the cloaked Drops folder, even though we aren't publishing anything yet.

Notice the LocalPath is set. This is so the contents of the NameDb solution folder are placed under a _Shared\NameDb folder, just like in the repository. It's not strictly needed for this build, but remember how this works if you're building multiple solutions with project dependencies in relative folders.


Change to the Variables tab and ensure the BuildConfiguration is "release".


Change to the Triggers tab. This controls what starts a build. Check Continuous Integration and set the path to the NameDb folder. We only want to build if a file in NameDb changes.


Now Save the definition and give it a good name like "NameDb".


Finally, queue the build and see if it succeeds!

Check each step's output for what you expect. Especially, check that tests ran! The test step will succeed even if it doesn't find any tests!



For the next solutions, I'll just list the settings for the build definitions. All of them use the Visual Studio template, and only keep the NuGet, Build and Test steps.

ReverseIt

Build steps:

  • NuGet Installer path to solution: $/Main/ReverseIt/ReverseIt.sln
  • Build solution path: $/Main/ReverseIt/ReverseIt.sln
  • Test assembly path: ReverseIt\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

Repository:

  • Map Server Path $/Main/ReverseIt to Local Path ReverseIt
  • Map Cloaked Path $/Main/ReverseIt/Drops

Variables:

  • BuildConfiguration: release

Triggers:

  • Include $/Main/ReverseIt


Strictly speaking, I don't need to build RevEngine. Any changes I make will trigger ReverseIt to build, and if it fails then someone--hopefully I--will be notified. What I do need to do is get the source code into the correct relative folder, and install the NuGet packages. In short, I need to ensure RevEngine can be used by ReverseNames.

So, I'm going to have two sets of steps: one for RevEngine, and one for ReverseNames. It's a brittle definition: whoever works on RevEngine needs to know about this build, too, in case something needs to change.

ReverseNames

Build steps:
  • NuGet Installer RevEngine

    • Path to solution: ReverseIt\**\*.csproj
    • NuGet Arguments: -PackagesDirectory ..\packages

    Notice I set the path to the local project files. This will restore packages for any .csproj file found. I also explicitly say where to put the packages folder relative to the .csproj file.

  • NuGet Installer ReverseNames

    • Path to solution: $/Main/ReverseNames/ReverseNames.sln
  • Build ReverseNames

    • Solution path: $/Main/ReverseNames/ReverseNames.sln
  • Test ReverseNames

    • Assembly path: ReverseNames\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

Repository:

This is critical. I'm telling the build exactly which project folders to pull from ReverseIt, i.e. RevEngine and RevEngine.Tests. This way I don't pull and build ReverseIt.csproj.

If I add a test project later, I'll need to add its mapped path here. Note that I removed the Drops cloaked path since I don't need it.

  • Map Server Path $/Main/ReverseIt/RevEngine to Local Path ReverseIt\RevEngine
  • Map Server Path $/Main/ReverseIt/RevEngine.Tests to Local Path ReverseIt\RevEngine.Tests
  • Map Server Path $/Main/ReverseNames to Local Path ReverseNames


Variables:

  • BuildConfiguration: release

I'm triggering the build if ReverseNames changes.

  • Include $/Main/ReverseNames

Here are some screenshots of the ReverseNames definition.




Improving the Solutions, Dependencies and Team Projects

What I've done so far works. Sort of. But it's not exactly ideal, especially if there were fifty solutions, not just three. One big thing I lose is the ability to maintain separate project boards and work items per project. To do that, I'd really like a separate Team Project per solution (or in some cases it could be multiple solutions).

The team projects might look like this.


And then there's the project reference. The project reference is bound to cause headaches in the future. One developer will change RevEngine and silently break the ReverseNames build. I say "silently," because it could be something like adding a new unit test project that doesn't get run by ReverseNames because it doesn't get pulled from source control.

Because it's a shared dependency, RevEngine needs to be in its own solution under _Shared and published as a NuGet package.

Right about now, someone's saying, "But but but! I need to be able to step through that code! And make changes that I can test against ReverseIt!!"

This might point to too much coupling between the projects, but so what? That's what you need. For debugging, publish a symbols package so you can still step into the shared code without a project reference.

If you really need to change code in the context of the solution,

  1. Drop the RevEngine NuGet reference from ReverseNames.
  2. Get the latest RevEngine code into its _Shared\RevEngine folder.
  3. Temporarily add the project reference to ReverseNames.
  4. Do the work.
  5. When finished, drop the RevEngine project reference.
  6. Re-add the NuGet reference (which doesn't have your changes, sorry).
  7. Open the RevEngine solution and run the tests.
  8. Commit the RevEngine changes, which, sorry, need to be taken through QA, published, etc.
  9. When that's finished, update the RevEngine NuGet package in ReverseNames.
  10. Run the tests, commit, QA, etc.

In other words, you need to treat RevEngine as if it were some third party assembly like Entity Framework or NLog.

All of this leads to...

Key Thinking to Managing Dependencies

  1. Treat your dependencies as if they're third party.
  2. Shared dependencies need to be in their own solutions.
  3. What does it take to check out, build and test the solution on a new computer?
  4. How would you store the project(s) on GitHub or other public remote repository?

The Plan

I'm going to do just three things, but they'll make a big difference.

  1. Reorganize the solutions into discrete team projects
  2. Publish shared project references as NuGet packages
  3. Update projects to use the packages

Before getting started in a production environment, I'd disable all of the affected TFS Build definitions. I don't want anything running if I don't need to.

I would also make a backup copy of all the affected source code, just in case something gets lost.

Reorganizing into Team Projects

First, I'll create my new team projects. Then I'll move my code.

You can also add team projects using the Visual Studio Team Explorer.

  1. Open TFS in the browser, e.g. http://nesbit:8080/tfs. Or, if you know your collection's name, you can open it directly and skip to step 4. (e.g. http://nesbit:8080/tfs/CICollection3/_admin)

  2. Click the upper right hand corner settings "gear" icon to open the Manage Server page.


  3. Select the Collection holding your team projects, and click "View the collection administration page".


  4. Click New Team Project, enter the information and Create.


Repeat to create the four team projects.


We can't move the code using the TFS web application, so:

  1. Open Visual Studio

  2. Open Team Explorer and click the plug icon to Manage Connections


  3. Double click the collection you're using to connect to it.


  4. Open Source Control Explorer.


  5. The new team project folders need to be mapped to local folders. This is kind of a pain, but with TFS Version Control there's no getting around it. It's easier with git. I would create a new folder named something like TempTeams to hold the new team projects, finish the moves, then delete all my source code mappings and start over. Like I said, a pain. Be very careful when doing all this that you don't accidentally delete source code from TFS you didn't want to.

  6. To map a team project folder, select it and click the Not Mapped link. Enter the destination folder, and when prompted Get the latest code (there won't be any, that's OK). Map all the team project folders.


  7. Open the NameDb solution. TFS still doesn't natively allow moving multiple files/folders at once, so we need to move the project contents one at a time. First, I'll move the solution file. Right-click, select Move, and enter the NameDb team project. The file will be moved to the NameDb team project.



  8. Move the NameDb and NameDb.Tests project folders the same way. You can right-click and move an entire folder, just not multiple folders.

  9. When finished, Commit the changes.

  10. You can now delete the NameDb folder from under $/Main/_Shared and commit that change.

Now I'm going to move just the RevEngine project folders to the new RevEngine team project. Later, I'll create their solution file.

Open the ReverseIt folder. Move the RevEngine and RevEngine.Test folders.



At this point, I move the remaining ReverseIt files/folders to their new team project. Likewise the ReverseNames solution.

Commit the changes. Delete the folders from $/Main, and commit that change, too.

Do NOT delete the $/Main team project! The TFS Build definitions would be deleted, too!

Finally, go to the RevEngine project in your local working folder, i.e. ..TempTeams\RevEngine\RevEngine, and open RevEngine.csproj. This will open the project in a solution; we just haven't saved the solution file yet.

Add the RevEngine.Tests project to the solution.


Now I have to be careful. I select the solution in Solution Explorer. Then, File > Save RevEngine.sln As.


In the Save As dialog, I navigate up one folder, so my solution file is at the root of RevEngine.


Now, I drag and drop the solution file into the Source Control Explorer's RevEngine team project.


Commit the change.

My projects are reorganized, and a couple will build (NameDb and RevEngine). Time to handle the RevEngine dependency.

Publish shared project references as NuGet packages

I'm still working in the TempTeams folder. I'll wait until everything's working before going back to my preferred folders.

Creating NuGet packages can be complex. For this walkthrough, I'm showing the simplest thing that works; I'm sure these steps are not ideal. The following assumes I have a local NuGet server at http://localhost:7275/nuget that doesn't require an API key for pushing packages (not recommended), and does allow pushing symbols.

  1. Open the solution, and edit the RevEngine Project Properties > Application > Assembly Information.

  2. Ensure Title, Description, Company and Product are filled in.


  3. Save and build the solution. You must build, because nuget packs the built dll. It does not build the solution for you.

  4. Download the latest recommended NuGet.exe file.

  5. Put nuget.exe in the RevEngine project folder.

  6. Open a command prompt and change directory to the RevEngine project folder.

  7. Run nuget spec to create a RevEngine.nuspec file.

  8. Edit RevEngine.nuspec, changing or removing the generated elements as desired (as with jamma.dll above).
  9. Run nuget pack -Symbols to create the regular and symbols package. Remember that, in our case, we want a symbols package so that we can step through the assembly without using a project reference.

  10. Run nuget push *.nupkg -Source http://localhost:7275/api/v2/package. This will push both of the packages.

Update projects to use the packages

This one should be pretty easy. In any solution that has RevEngine as a project reference, remove the project and the reference, then install the NuGet package. Notice that jamma.dll is installed as well, because RevEngine depends on it and the RevEngine project was referencing the jamma.dll NuGet package when it was packaged.

After updating, if I open ReverseIt (for example), put a breakpoint on this line,


then run the program, I can step into RevEngine.TextUtilities.cs, which is now part of the debugging symbols.

Update TFS Build Definitions

It's time to get our builds working again!

TFS doesn't natively support copying/moving build definitions. One solution is to write code using the TFS web API to clone definitions. However, there's a TFS extension for this, which really saves the day. You can download it here.

Export/Import Build Definitions

If using TFS 2015, you must use version v0.0.2. Later versions only work with TFS 2017.

Follow the instructions to install the extension.

To install 'Export/Import Build Definition' (EIBD) on Team Foundation Server 2015 Update 2 and above, perform the following steps:

  1. Navigate to the Team Foundation Server Extensions page on your server. (For example, http://someserver:8080/tfs/_gallery/manage)
  2. Click Upload new extension and select the file you have just downloaded.
  3. After the extension has successfully uploaded, click Install and select the Team Project Collection to install into.

To move NameDb:

  1. In the Main team project Build tab, right-click the build definition and choose Export. Save the json file to a folder such as TfsBuildExports.


  2. Change to the NameDb team project Build tab. EIBD has a known limitation: the Export/Import menu items can only be seen on a build definition name, not the "All build definitions" item. So, if necessary, create an empty definition and save it with a non-conflicting name.

  3. Right-click a definition and choose Import, selecting the .json file.

  4. Edit the imported definition.

  5. Make the following changes.


Build steps:

  • NuGet Installer path: $/NameDb/NameDb.sln
  • Build path: $/NameDb/NameDb.sln
  • Test path: **\$(BuildConfiguration)\*test*.dll;-:**\obj\**

Repository:

  • Map $/NameDb, leave Local Path empty

Triggers:

  • Include $/NameDb
  6. Test!


ReverseIt

Use the same approach as NameDb, namely changing the paths in Build, Repository and Triggers. (In fact, ReverseIt would work with the default Visual Studio template.)


ReverseNames

Likewise, ReverseNames can be simplified because I no longer have the RevEngine project to deal with. In fact, all I have to do is delete anything related to RevEngine, then update the remaining paths as I've done above.


RevEngine

This is a new build definition, and it follows the same simplified pattern as above.

What Just Happened?

I'll tell you what. Our build definitions got simpler because

  1. We converted our project references to NuGet packages.
  2. We contained our code in team projects.

Admittedly, the sample was a pretty simple case. I could have a team project that legitimately encompasses multiple solutions. But if I still apply the key principles from above, I can have clean maintenance and simpler builds. As a bonus, it should be much easier to switch to git if I want, since I'm now treating my code as discrete instead of monolithic.

Clean Up

I can now delete the $/Main team project. But, despite there being a right-click menu item, I can't do it from Source Control Explorer. So, (sigh), back to the web interface and my collection administration page. Select Delete from the dropdown to the left of the team project.


Am I sure the team project is empty? If so, enter its name and delete it.



References

  • Creating a Package
  • Configuring NuGet Behavior
  • Using Private NuGet Repository
  • How to Get TFS2015 Build to Restore from Custom NuGet Source 1
  • How to Get TFS2015 Build to Restore from Custom NuGet Source 2
  • NuGet Package Source Config Locations
  • Introducing NuGet 4
  • Specifying NuGet Config path for TFS Build

Next Part: TFS Continuous Integration Walk Through Part 5d - Multiple Solutions: Build Settings

Thursday, March 23, 2017

Sane Database Schema Conventions


These are sane conventions for constructing and naming a database schema. They aren't new, and there's sure to be something someone doesn't like. They are biased toward the .Net EntityFramework, which itself was influenced by the Ruby on Rails ActiveRecord conventions by David Heinemeier Hansson.

Using these conventions makes it easier to translate the tables into classes. While this isn't always desirable (or correct), it often is.


This sample schema exemplifies the conventions, and shows most relationships you'll encounter. It's semi-realistic. PropertyRecords is contrived to show a pseudo one-to-one relationship. (Note: MS SQL doesn't allow a true one-to-one structure, as different tables' rows can't be simultaneously created.)



  • A Customer has an Initial contact Employee, Support Employee and a Salesperson. An Employee can service multiple Customers.
  • An Employee can be a Salesperson.
  • A Customer has one or more Addresses, an Address belongs to one Customer.
  • An Address has one Property Record, a Property Record is for one Address.
  • A Customer has zero or more Orders, an Order has one Customer.
  • An Order can have many Vendors, a Vendor can fulfill many Orders.
  • An Order can have many Promotions, a Promotion is for zero or more Orders.
  • An Order Promotion has zero or more customer notifications.

Table Column Layout

I like my table columns ordered this way.

  1. Primary key
  2. Foreign keys
  3. Regular columns
  4. Audit columns

General Names

Pluralize Most Table Names

Pluralizing table names reduces the chances of keyword conflicts, and matches how the table will (typically) be treated in an ORM tool.

Yes            |No               
Customers      |Customer
Salespeople    |Salesperson

Don't pluralize many-to-many join tables. By convention, keep the table name parts alphabetical.

Using Code First, Entity Framework might pluralize this to OrderVendors. Personally, I'd use the Fluent API to force the table name to OrderVendor.

Yes            |No               
OrderVendor    |OrdersVendors, OrderVendors, VendorOrder              
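As a sketch, forcing the name with the EF6 Fluent API might look like this (entity and property names come from the sample schema; note EF requires the navigation properties to be ICollection<T> for this mapping):

```csharp
// Hypothetical DbContext override (EF6 Code First).
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Force the many-to-many join table to the singular name OrderVendor.
    modelBuilder.Entity<Order>()
        .HasMany(o => o.Vendors)
        .WithMany(v => v.Orders)
        .Map(m => m.ToTable("OrderVendor"));
}
```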

Use PascalCase

Tables and columns should be in PascalCase.

Yes            |No               
Customers      |customer
OrderNbr       |orderNbr

No Dashes or Underscores

It's tempting to separate words in either table or column names, but don't. Keeping them PascalCased makes the transition to classes easier and clearer. Also, some databases don't play nicely with underscores or dashes, depending on how they're used.

Yes            |No               
OrderVendor    |Order_Vendor
OrderNbr       |order-number
CustomerId     |Customer_ID


My personal preference is to end key names with "Id", rather than "ID". It reads just as well, and is consistent with PascalCasing and .Net naming conventions.

Primary Key

Some people prefer a primary key of just "Id", but if you need to run SQL queries (and you will), it's easier to have the table name in the primary key for creating joins and reading the results.

SELECT  c.CustomerId, c.Name, a.AddressId, a.Address1
FROM    Customers c
        JOIN Addresses a on c.CustomerId = a.CustomerId


CustomerId  Name  AddressId Address1
----------  ----  --------- ----------
        23  Ron         402 12 Main St
        47  Eve          11 3 Polo Ave        

Simple Tables
TableName + "Id"

Yes            |No               
CustomerId     |CustomerID, Customer_id, Customer_ID

Many-to-Many Tables
A regular join table doesn't need its own primary key. Just use the other tables' primary keys to form a composite key.

OrderId  (PK, FK)
VendorId (PK, FK)
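In T-SQL, a sketch of this composite-key join table (assuming the Orders and Vendors tables already exist) could be:

```sql
CREATE TABLE OrderVendor (
    OrderId  int NOT NULL REFERENCES Orders (OrderId),
    VendorId int NOT NULL REFERENCES Vendors (VendorId),
    PRIMARY KEY (OrderId, VendorId)
);
```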

The resulting classes should have these properties:

class Order 
  IEnumerable<Vendor> Vendors

class Vendor
  IEnumerable<Order> Orders

A join table with payload--one that has its own columns and/or will be joined to another table--should have its own primary key of Table1+Table2+Id.

OrderPromotionId (PK)
OrderId          (FK)
PromotionId      (FK)
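A T-SQL sketch of the payload join table (table names assumed from the example above):

```sql
CREATE TABLE OrderPromotion (
    OrderPromotionId int IDENTITY PRIMARY KEY,
    OrderId          int NOT NULL REFERENCES Orders (OrderId),
    PromotionId      int NOT NULL REFERENCES Promotions (PromotionId)
);
```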

The resulting classes

class Order
  IEnumerable<OrderPromotion> OrderPromotions
  //You can manually add this method to get Promotions
  IEnumerable<Promotion> Promotions
  {get {return OrderPromotions.Select(op => op.Promotion);}}

class Promotion
  IEnumerable<OrderPromotion> OrderPromotions

Foreign Key

When possible, use the referenced primary key name. If there are multiple foreign keys to the same table, add a distinguishing prefix but still end with the referenced primary key name.

Yes                       No               
-------------             -----------------
Employees
=========
EmployeeId (PK)

Customers                 Customers
=========                 =========
CustomerId (PK)           Customer_Id (PK)
SupportEmployeeId (FK)    SupportPerson (FK)
InitialEmployeeId (FK)    InitialEmp_ID (FK)

Addresses                 Addresses
=========                 =========
AddressId  (PK)           AddressId (PK)
CustomerId (FK)           CustID    (FK)

Date Columns

Use a verb, and end the date or datetime column names with "On".

Following this convention often leads to clearer column meanings and more consistency. For example, DateToPlace or PlaceDateTime becomes PlaceOrderOn.

Yes            |No               
OrderedOn      |OrderDateTime
ShouldShipOn   |DateShipExpect, ExpectedShip_DT, AnticipatedDate
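For example, a hypothetical Orders table using this convention might declare its date columns like this:

```sql
CREATE TABLE Orders (
    OrderId      int IDENTITY PRIMARY KEY,
    OrderedOn    datetime2 NOT NULL,  -- when the order was placed
    ShouldShipOn datetime2 NULL       -- expected ship date
);
```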

Don't Abbreviate

Abbreviations are, by their nature, ambiguous and culture-centric. Avoid them unless they are very common, consistent, and/or well-known in the organization or industry.

A good example of ambiguity is how to abbreviate "number". Even my schema example, "InvoiceNbr", is potentially ambiguous. But InvNo is worse. Is there another column "InvYes"?

Yes            |No               
Customers      |Custs
CustomerId     |CustId
FirstName      |Fname
InvoiceNbr     |InvNo

Use Consistent Names (and Abbreviations if You Must)

  • If you must abbreviate, be consistent.
  • If it's spelled "InvoiceNbr" in one table, it's that way in all tables.
  • If everyone knows what DestinationBOL means, that might be OK. But maybe it's better to expand it to DestinationBillOfLading.

When in doubt, refer to Flatt's Law #6: Clarity is more important than brevity.

Yes            |No               
InvoiceNbr     |InvoiceNum, InvNumber, Inv_Nmbr


Audit Columns

To be honest, I've often found audit columns to be more trouble than they're worth. If auditing is needed, I think it's better to have a separate audit history table where you can record many kinds of changes, including deletions.

But, if I am using them, and am tracking who took an action, I don't link to another table (such as Users or Employees); instead I record the name itself. This significantly reduces joins, and makes it easy to indicate that a process (rather than a person) performed an action. In other words, UpdatedBy is a string column that contains a value like "cflatt" or "Nightly Batch Process".

  • CreatedOn
  • CreatedBy
  • UpdatedOn
  • UpdatedBy
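As a sketch, here are the audit columns on a hypothetical table, with UpdatedBy as a plain string rather than a foreign key:

```sql
CREATE TABLE Promotions (
    PromotionId int IDENTITY PRIMARY KEY,
    Name        nvarchar(100) NOT NULL,
    -- Audit columns. UpdatedBy holds a name like 'cflatt'
    -- or 'Nightly Batch Process', not a key to a Users table.
    CreatedOn   datetime2 NOT NULL,
    CreatedBy   nvarchar(50) NOT NULL,
    UpdatedOn   datetime2 NULL,
    UpdatedBy   nvarchar(50) NULL
);
```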


Inheritance

Inheritance can be modeled in the database a couple of different ways. I prefer Table Per Type, as shown by the Employee and Salesperson tables. Note that the Salesperson table has an EmployeeId primary key. This is what implies the inheritance. In the application's class model, these would become:

public class Employee
  int EmployeeId
  string Name 

public class Salesperson: Employee 
  double CommissionPercent 

public class Customer
  int CustomerId
  string Name
  Employee SupportEmployee
  Employee InitialEmployee
  Salesperson Salesperson
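A T-SQL sketch of the Table Per Type layout described above (column types are assumptions for illustration):

```sql
CREATE TABLE Employees (
    EmployeeId int IDENTITY PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

-- The shared EmployeeId primary key is what implies the inheritance.
CREATE TABLE Salespeople (
    EmployeeId        int PRIMARY KEY REFERENCES Employees (EmployeeId),
    CommissionPercent float NOT NULL
);
```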

Wednesday, March 8, 2017

TFS Continuous Integration - Agent Installation and Visual Studio Licensing

The Summary

A build agent is what takes care of actually running a build definition. Agents can be installed on machines separate from the TFS server, allowing workload distribution.

The simple way to understand an agent is to imagine how you'd create continuous integration yourself.

  1. You'd have a machine that could build the software. That means you'd have to install anything needed to accomplish the build, such as Visual Studio, 3rd party controls, certificates, tools, etc.

  2. You'd write a script that could automate the build and report errors.

  3. You'd create a way of running that script on demand, such as developing a Windows service.

That's all TFS Build is doing. You configure the steps (the build script) on the TFS server as a build definition. You install an agent on a machine that can check out the source code and successfully build the application. TFS calls the agent on demand.

It was unclear to me if I needed a licensed version of Visual Studio, or VS installed at all. The answers are:

  • If you're not using the Visual Studio build step, and only the MS Build step, you might be able to get away with installing the 2015 Build Tools.
  • However, you'll probably need Visual Studio installed. It does not need to be licensed, assuming it's not also being used for development.

VS 2015 Licensing White Paper

Using Visual Studio on the Build Server: If you have one or more licensed users of Visual Studio Enterprise with MSDN, Visual Studio Professional with MSDN, or any Visual Studio cloud subscription then you may also install the Visual Studio software as part of Team Foundation Server 2017 Build Services. This way, you do not need to purchase a Visual Studio license to cover the running of Visual Studio on the build server for each person whose actions initiate a build.


Installing an agent is pretty simple. Really, just read Ben Day's post and you'll find out what you need. It's slightly outdated, but close enough. I've also listed the steps, below.

The Installation

  1. Install everything needed to build the software. It's best to do this first.
  2. Download the agent from the TFS web. Manage Server (click the right corner gear) > click link "View collection administration page" > open Agent Queues tab > click "Download agent"
  3. Extract the zip into C:\TfsData\Agents\[agent name]
  4. Run ConfigureAgent.cmd
  5. Mostly accept the defaults. The TFS server URL will be something like http://servername:8080/tfs. Answer Y to installing as a service.

After installation, you should see the agent in the Agent Queues.


Agent Versions

If you're using a local TFS installation, the agent version is tied to the TFS version. If you update TFS, be sure to update the agents. It's easy. In Agent Queues, right-click the queue and choose Update All Agents.

Adding Agent Capabilities

Normally, all you need to do is install the software with the capability, then restart the agent. However, here are a couple of articles related to capabilities.

How to Register Capabilities

The Wrap Up

Agents are just services that run build steps. An agent can be installed on almost any machine, letting you easily configure your build environment.

Monday, March 6, 2017

TFS Continuous Integration - ClickOnce Apps

The Summary

Oh, ClickOnce, you bane of development! You're always so attractive: easily created, self-updating installations. But, like a 21st century TV vampire, you end up sucking the life out of me when things get complicated.

In the case of continuous integration, we need to sign our application using a security certificate, to guarantee the publisher's identity. This makes sense, since the intent is that ClickOnce is installed and maintained from a web site.

So, there are two parts to manage in CI: the certificate and the signing.

There are several combinations for trying to build ClickOnce. Is your TFS on site, or are you using Visual Studio Online? Are you signing using a commercial, local-domain, or temporary certificate?

This document is for a specific circumstance:

  • Local TFS 2015
  • A temporary certificate

This post will not deal with publishing a ClickOnce application via CI.

The Problems

So what happens when you try to build a ClickOnce app on a separate CI server (without Visual Studio installed)?

It fails, that's what. At minimum, in the above scenario of using the default temporary certificate (which you shouldn't), it will fail because the signing utility, signtool.exe, isn't installed on the server.

How do you manage the signing process on a locally hosted machine?

Locally Installed TFS

Install SignTool.exe on the Server

When a developer creates a ClickOnce app, she must have the ClickOnce tools installed. In Visual Studio 2015, this is a feature checkbox during installation. If you forgot, you can open Programs and Features, right-click Visual Studio, and choose Change to rerun the installation.

But we're not going to install Visual Studio on the server.

Just the Files?

Can we create the appropriate folder path on the server and just copy the needed files, instead of installing 1GB of utilities? Maybe. It looks like the required files are:


And the paths are:

C:\Program Files (x86)\Windows Kits\8.1\bin\x86
C:\Program Files (x86)\Windows Kits\10\bin\x86

I used the first path on Windows 10 and it worked fine. This is definitely worth testing before installing the SDK.

Via the Windows SDK

We need to install the super-bloated SDK. Which one?

If you're on Windows 8.1/10/2012/2012R2, install the Windows 10 SDK.

If you're on Windows 7/8/2008R2, install the Windows 8 SDK.

During installation, choose the Windows Software Development Kit. Funny, in Windows 10 they moved the SDK to the bottom!



On my Windows 10 machine, running Visual Studio 2015, when I installed the ClickOnce tools, signtool.exe was installed to the Windows 8.1 SDK folder instead of Windows 10. It works...but go figure.

The Wrap Up

For my specific case, it was relatively easy to get signing to work during the build. If a non-temporary certificate had been involved, I could have installed that to the server.

This doesn't answer what to do if using Visual Studio Online or some other continuous integration server. That will be an adventure for another day.

Friday, March 3, 2017

TFS Continuous Integration and Private NuGet Package Sources

The Summary

What if you

  1. Don't store your NuGet packages in source control.
  2. Have a NuGet package that's hosted in a private source.
  3. Need to locally test continuous integration.
  4. Or need offsite (i.e. cloud) continuous integration.

There are a few general solutions.

  1. Store the private packages in source control. I'd do this one.
  2. Set up public access to the private source. Not likely.
  3. For testing, set up a local private NuGet source (just point to the folder).

The Problems

NuGet packages are generally great. But there can be problems when it comes to source control and continuous integration.

Most sites' advice on storing NuGet packages in source control (regardless of git, Mercurial, TFS, SVN, etc) is: don't.

The reason is pretty simple. If package restore is enabled, they'll get downloaded and rebuilt anyway, so why store them and take up repository space? With later NuGet versions, a cache is maintained in the user's profile, so a trip to the NuGet servers might not even be necessary.

The counter argument is also pretty simple. What if the NuGet source isn't available? Suddenly, you can't restore your packages, can't build, can't work. When would this happen?

All of these assume you've downloaded from source control (such as GitHub or TFS), but haven't built the code yet.

  1. You get on a plane and then try to build. If you have no Internet, you can't get the packages.
  2. For some other reason you don't have an Internet connection.
  3. You're connected, but the NuGet site is down.
  4. One of the packages uses a locally (i.e. corporate internal) hosted NuGet source. It's not on the internet, so you can't download the package.
  5. It's been a long time since anyone's built the source, and a package has been removed from NuGet.
  6. You use the cloud for continuous integration, which won't have access to your private NuGet source.

Most of the above can be solved by building the solution immediately after getting it. But here's a real-world example of number 4. I was working for a client, and had access to their Team Foundation Server. I got the source via a VPN connection, copied it to my laptop, and tried to build. This client has several NuGet packages they host locally. They're proprietary, so hosting them on a public site like NuGet would be wrong.

And I didn't have access to that package source, so I couldn't build.

Again, these problems could be solved, and maybe they point to some environment changes needing to be made. But wouldn't it be just as easy to include the package in source control?

Including the NuGet Package Folder in TFS

If you're using a version of TFS prior to 2012, I can't help you (and you should upgrade). Starting in TFS 2012, the tfignore file became available. The purpose of tfignore is just like gitignore and hgignore: tell source control which files to not display for adding/tracking.

But it can also be used to explicitly allow files. Why is this needed? Because, unlike git or Mercurial, TFS + Visual Studio ignores certain files by default, and I haven't found a way to change that or even find out what files those are. DLLs are ignored by default, for example.

  1. Create the .tfignore file in the root of the team project.
  2. Edit the file.
  3. Add the package files to TFS.
  4. Add your .tfignore file to source control.

The manual way to create a .tfignore file is, in the project folder root:

  1. Right-click > New > Text File
  2. Enter .tfignore.

See the trailing period? That's the magic sauce. Otherwise, Windows doesn't let you create a file/folder with a leading period.

Visual Studio shows package folders in Excluded Changes by default. But dlls, and sometimes lib folders, are not included, and we absolutely need those.

Here's how to include the entire packages folder. The leading ! means "don't ignore".


These don't work. Note, especially, that the leading backslash doesn't work, even though that's what VS itself will create via the GUI.

  • !\packages*.*
  • !packages
  • !packages\

To include a specific package, such as one that's hosted in a private NuGet source, ignore the packages folder, then specify the package path without the version number.


In Visual Studio Team Explorer, choose Pending Changes, find Excluded Changes, click Detected: x adds(s).


Deselect all files (they're selected by default, a bad choice), then check the ones you want and Promote them (which is the same as "add" in git or Mercurial).


Using a Private Repo for Testing

If you're testing CI on a separate machine, you can use or modify a nuget.config file to include that source. I tried this at the project level, but it didn't work, so I created the nuget.config file at the solution level. (But that ability is supposedly removed in later NuGet versions. So confusing!)

Here's an example file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="local" value="C:\Users\charles\Documents\Testing\NuGet\" />
  </packageSources>
</configuration>

Again, this must not end up in the production source control repository!

The Wrap Up

I'm not weighing in on storing NuGet packages in source control, except to say that there are some situations where it's clearly a good idea. Having a private NuGet source involved is one.

Wednesday, February 22, 2017

TFS Continuous Integration Walk Through Part 5b - Multiple Solutions: Simple Project References

This is part of a series of walk throughs exploring CI in TFS, starting from the ground up. The entire series and source code are maintained at this BitBucket repository.

Previous Part: TFS Continuous Integration Walk Through Part 5a - Multiple Solutions - Overview

A New Beginning

One of my goals for the Multiple Solutions walkthroughs is to put the TFS repository in a, let's say, less than stellar organizational state, and then clean up. To do that, I'll start with a new, clean TFS collection.

Open TFS Administration Console, select Team Project Collections, and click Create Collection.


Choose a silly name like "CICollection2". Next.


Verify the SQL server instance and create a new database. Next, Verify, Create.


It'll take a few minutes to create the collection. Click Complete, then Close. You can close the Admin Console, too.


Connect Visual Studio to the New Collection

This was stupidly difficult to figure out. Google searches yielded nothing.

Open Visual Studio and the Team Explorer, then open Manage Connections.


Drop down "Manage Connections" and choose Connect to Team Project.


Select CICollection2, then click Connect.


This is the first time I'm using this collection, so I need to map my TFS Workspaces.

I hate TFS workspaces.


Pick one of the "map workspaces" links, take the defaults, click Map & Get.


Now I'm connected to the new collection.


Create a new Team Project

In Visual Studio Team Manager, create a new Team Project by choosing Home > Projects & My Teams > Create Team Project.


Remember, one of my intentions is a messy collection. So, I'll name my team project Main. (I'm tempted to name it "Turing" and make life even worse, but this will do.) Click Next.


The default Agile process is fine. Next.


We're sticking with Team Foundation Version Control. Click Finish.


After a few seconds our team project is created.


Add the TuringDesktop Solution

In Visual Studio, create a new Console project. In the dialog, check the Add to Source Control box.


In the source control dialog, accept the defaults. We're adding our solution folder to the root.


So, right now our TFS collection structure is:


If you remember from the previous part, our desktop app is going to have two dependencies: the Magic8Engine, and TextColor. Right now, I don't know that Magic8 is going to be a shared NuGet package, so I just add it as a new Class Library project. The same goes for TextColor.

I'm also going to need a unit test project, so I'll add an MSTest project now as well. I'm going to name it TuringDesktop.Tests, even though it's going to contain tests for all my projects.


In the end, this is my solution's folder structure.


Our First Suite


Assign TextWriter to MemoryWriter
Mocking System Console Behaviour
Magic 8-Ball

Below is all the code and tests for the solution. This is my super-pre-release version, and it has some (mostly intentional) problems.

The Solution Projects

Right now I've got everything in one solution. I'll explain each project in turn.


This is a Console application. It references the Magic8Engine and TextColor projects.


I'm simulating a database with a class that returns a list of answers, and is initialized with a connection string. For this simple sample, the connection string is just a name like "production". If the connection string is named something other than "production", then the name is prepended to each answer. For example, if the connection string were "stage", an answer might be "STAGE: It is certain."

using System;
using System.Collections.Generic;
using System.Linq;

namespace TuringDesktop
{
    public static class AnswerDatabase
    {
        /// <summary>
        /// Any connection string other than "production" prepends answers with its name.
        /// Ex. connectionString = "stage", then answer is "STAGE: It could be so."
        /// </summary>
        /// <param name="connectionString"></param>
        public static IEnumerable<string> GetAnswers(string connectionString)
        {
            if (String.IsNullOrWhiteSpace(connectionString))
                throw new ArgumentException("connectionString cannot be null or empty");
            if (connectionString == "production")
                return DefaultAnswers;
            return DefaultAnswers.Select(a => connectionString.ToUpper() + ": " + a);
        }

        private static IEnumerable<string> DefaultAnswers
        {
            get
            {
                yield return "It is certain";
                yield return "It is decidedly so";
                yield return "Without a doubt";
                yield return "Yes, definitely";
                yield return "You may rely on it";
                yield return "As I see it, yes";
                yield return "Most likely";
                yield return "Outlook good";
                yield return "Yes";
                yield return "Signs point to yes";
                yield return "Reply hazy try again";
                yield return "Ask again later";
                yield return "Better not tell you now";
                yield return "Cannot predict now";
                yield return "Concentrate and ask again";
                yield return "Don't count on it";
                yield return "My reply is no";
                yield return "My sources say no";
                yield return "Outlook not so good";
                yield return "Very doubtful";
            }
        }
    }
}

Here's the Program class with my Main code. There are a few things to note:

  • I removed the args parameter from the Main method. I don't need it.
  • I made the Main method public, so it can be called from unit tests.
  • I'm using Dependency Injection for the answer engine (Oracle) and the console text colorizer (ConsoleColorizer).
  • There are three settings using a string literal: _connectionString, questionColor and answerColor.
using System;
using TextColor;
using Magic8Engine;

namespace TuringDesktop
{
    public class Program
    {
        static string _connectionString = "production";
        static IConsoleColorizer _colorizer = new ConsoleColorizer();
        static IOracle _oracle = new Oracle(AnswerDatabase.GetAnswers(_connectionString));

        public Program() { }

        public Program(IConsoleColorizer colorizer, IOracle oracle)
        {
            _colorizer = colorizer;
            _oracle = oracle;
        }

        public static void Main()
        {
            string name = "";
            string answer = "";
            string question = "";
            string questionColor = "Red";
            string answerColor = "green";

            Console.Write("Welcome! I'm AT. Please tell me your name. >> ");
            name = Console.ReadLine();
            Console.WriteLine("Good to meet you, " + name + ". "
                + "Ask me a yes-or-no question and I'll give you an answer. "
                + "When you're finished, say 'bye'.");
            do
            {
                try
                {
                    _colorizer.ColorizeWriteLine("Question?", questionColor);
                    question = Console.ReadLine();
                    if (question.ToLower() == "bye") { break; }
                    answer = _oracle.GetAnswer();
                    _colorizer.ColorizeWriteLine(answer, answerColor);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error: " + ex.GetBaseException().Message);
                    Console.Write("Press any key to quit.");
                    Console.ReadKey();
                    break;
                }
            } while (true);
        }
    }
}

The program's output is simple enough. It greets the user, asks for a name, then starts answering questions. Text coloring is used to make things clear.



I have an IOracle interface, so that it's easier to use in unit tests.

namespace Magic8Engine
{
    public interface IOracle
    {
        string GetAnswer();
    }
}

The concrete class is initialized with a list of answers (the "database") from which it randomly selects.

using System;
using System.Collections.Generic;
using System.Linq;

namespace Magic8Engine
{
    public class Oracle : IOracle
    {
        Random _random = new Random();
        private IEnumerable<string> _answers = new List<string>();

        public Oracle(IEnumerable<string> answers)
        {
            _answers = answers;
        }

        public string GetAnswer()
        {
            if (_answers.Count() == 0)
                return "";
            int index = _random.Next(_answers.Count());
            return _answers.ToArray()[index];
        }
    }
}


The ConsoleColorizer class also implements an interface to improve unit testing.

using System;

namespace TextColor
{
    public interface IConsoleColorizer
    {
        void ColorizeWriteLine(string text, string colorName, bool resetColor = true);
        ConsoleColor GetConsoleColor(string colorName);
        void ResetConsoleColor();
        void SetConsoleColor(string colorName);
    }
}

The concrete class sets properties on the Console object.

using System;

namespace TextColor
{
    public class ConsoleColorizer : IConsoleColorizer
    {
        public void ColorizeWriteLine(string text, string colorName, bool resetColor = true)
        {
            SetConsoleColor(colorName);
            Console.WriteLine(text);
            if (resetColor) { Console.ResetColor(); }
        }

        public ConsoleColor GetConsoleColor(string colorName)
        {
            ConsoleColor color;
            if (Enum.TryParse<ConsoleColor>(colorName, true, out color))
            {
                return color;
            }
            throw new ArgumentException("Invalid ConsoleColor: " + colorName);
        }

        public void SetConsoleColor(string colorName)
        {
            Console.ForegroundColor = GetConsoleColor(colorName);
        }

        public void ResetConsoleColor()
        {
            Console.ResetColor();
        }
    }
}

This should raise a red flag. System.Console is a dependency. Why didn't I abstract that out? I could have. For example, I could have created an IConsole interface with just the features I'm using, then an explicit SystemConsole wrapper class that implemented the interface. But it turned out I could test Console successfully without needing to make it entirely replaceable.

Good question though. Glad you're thinking.


Finally, our test project has a class for each project we're testing.


Here are the unit tests for each project. I'm using nested classes to keep the tests readable.

Magic8 Tests

Note how these tests make use of passing in a custom "answer database".

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Magic8Engine;

namespace TuringDesktop.Tests
{
    [TestClass]
    public class OracleClass
    {
        [TestClass]
        public class GetAnswer_Should : OracleClass
        {
            static List<string> _answers;

            [TestInitialize]
            public void TestInitialize()
            {
                //Start each test with empty list.
                _answers = new List<string>();
            }

            [TestMethod]
            public void ReturnARandomAnswerEachTimeItIsCalled()
            {
                _answers.AddRange(new string[] { "a", "b", "c" });
                var oracle = new Oracle(_answers);
                List<string> answers = new List<string>();
                for (int i = 0; i < 10; i++)
                {
                    answers.Add(oracle.GetAnswer());
                }
                int uniqueAnswers = answers.Distinct().Count();
                Assert.IsTrue(uniqueAnswers > 1);
            }

            [TestMethod]
            public void ReturnAnEmptyAnswerIfAnswerListIsEmpty()
            {
                var oracle = new Oracle(_answers);
                string actual = oracle.GetAnswer();
                Assert.AreEqual("", actual, "Answer list count: " + _answers.Count());
            }
        }
    }
}

TextColor Tests

Neither of these tests confirms the correct color gets set. That's really something best left to a human to verify. But notice in WriteTheUserEnteredString how I'm using the Console.SetOut property to let me verify that user input actually gets written to the console.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.IO;
using TextColor;

namespace TuringDesktop.Tests
{
    [TestClass]
    public class ConsoleColorizerClass
    {
        [TestClass]
        public class GetConsoleColor_Should
        {
            [TestMethod]
            public void ReturnTheEnumUsingCaseInsensitive()
            {
                var tc = new ConsoleColorizer();
                string colorName = "gReEn";
                Exception actualEx = null;
                string errorMsg = "";
                //Will change to Green below.
                ConsoleColor color = ConsoleColor.Black;
                try
                {
                    color = tc.GetConsoleColor(colorName);
                }
                catch (Exception ex) { actualEx = ex; errorMsg = "Threw error " + ex.GetBaseException().Message; }
                Assert.IsNull(actualEx, errorMsg);
                Assert.AreEqual(ConsoleColor.Green, color);
            }

            [TestMethod]
            public void WriteTheUserEnteredString()
            {
                //Store the Console output in a StringWriter.
                StringWriter sw = new StringWriter();
                Console.SetOut(sw);
                var tc = new ConsoleColorizer();
                tc.ColorizeWriteLine("blamo", "Red");
                Assert.AreEqual("blamo\r\n", sw.ToString());
            }
        }
    }
}

Console App Tests

If you're thinking the Main method should be refactored, you're right. I'll do that later. For now, it's interesting to see how we can test a console app.

First, I'll create my mock objects. I'm not even checking MockOracle's GetAnswer method, but that's OK. What I'm guaranteeing is my unit tests have no external dependencies.


    public class MockOracle : Magic8Engine.IOracle
    {
        public string GetAnswer()
        {
            return "This is a fake answer";
        }
    }


In this mock, I store the simulated user's input in a list, and I always return a black console color. I don't set or reset console colors.

Remember, I'm not testing if the colorizer works. There are already unit tests for that. I'm testing if the Main method works. I just need my mock objects to return consistent results quickly.

    public class MockColorizer : TextColor.IConsoleColorizer
    {
        public List<string> ConsoleLines = new List<string>();

        public void ColorizeWriteLine(string text, string colorName, bool resetColor = true)
        {
            ConsoleLines.Add(text);
        }

        public ConsoleColor GetConsoleColor(string colorName)
        {
            return ConsoleColor.Black;
        }

        public void ResetConsoleColor() { }

        public void SetConsoleColor(string colorName) { }
    }

Here are the tests for the Main method. Notice how I'm using StringWriters for both Console.In and Console.Out. This lets me buffer all the responses a user would make, and capture the console's output. I use a couple of helper methods to make this work.

What I'm not trying to do is test if Console works. I'm testing whether my code that writes to the console works. But, unfortunately, I do have a dependency on Console.

So, are these unit tests, or integration tests? Short answer: integration. I'll make some improvements later to isolate code that doesn't depend on Console.

There's a real danger here that I'll forget to use SendUserInputs, leading to the application hanging. I know this danger exists because I did it. The tests are brittle.

    [TestClass]
    public class ProgramClass
    {
        [TestClass]
        public class Main_Should
        {
            StringWriter _consoleOut = new StringWriter();
            string UserInputs = "";

            [TestInitialize]
            public void TestInitialize()
            {
                //Store the Console output in a StringWriter.
                Console.SetOut(_consoleOut);
            }

            [TestMethod]
            public void DisplayInitialGreetingMessage()
            {
                var mockColorizer = new MockColorizer();
                var mockOracle = new MockOracle();
                var program = new Program(mockColorizer, mockOracle);
                AddUserInput("Charles");
                AddUserInput("bye");
                SendUserInputs();
                Program.Main();
                string expected = "Welcome! I'm AT. Please tell me your name.";
                string output = _consoleOut.ToString();
                Assert.IsTrue(output.IndexOf(expected) >= 0);
            }

            [TestMethod]
            public void DisplayWelcomeWithName()
            {
                var mockColorizer = new MockColorizer();
                var mockOracle = new MockOracle();
                var program = new Program(mockColorizer, mockOracle);
                AddUserInput("Charles");
                AddUserInput("bye");
                SendUserInputs();
                Program.Main();
                string expected = "Good to meet you, Charles.";
                string output = _consoleOut.ToString();
                Assert.IsTrue(output.IndexOf(expected) >= 0, "Output was: " + output);
            }

            #region "Test Helpers"
            private void AddUserInput(string value)
            {
                UserInputs += value + Environment.NewLine;
            }

            private void SendUserInputs()
            {
                //Send all the inputs needed for the Read and ReadLine statements
                StringReader consoleIn = new StringReader(UserInputs);
                Console.SetIn(consoleIn);
            }
            #endregion
        }
    }

Running my tests, I get nice, readable output.


Initial Continuous Integration Build Definition

In Visual Studio Team Explorer, click Builds.


Click New Build Definition.


This will open the web page and prompt for a definition. This is nice because we're taken directly to our team project builds.


Choose the Visual Studio template, check the box that says Continuous Integration and accept the defaults. Save the definition, naming it TuringDesktop Suite Build.

If you've been following along, you'll realize that we've never committed any of our source code! Do so now and the build should happen automatically and successfully. Be sure to verify that all the tests ran!



Next Up

We have a working application that's automatically built in our CI server. But we've also got some problems:

  • Refactoring for the main application's Console dependency.
  • String literals for settings.
  • Our application's data source is the same in development, CI, and production.
  • No separate integration tests.

In short, we're about to go from nice, sunny sample development to real-world, why-does-this-have-to-be-so-hard programming.

Next Part: TFS Continuous Integration Walk Through Part 5c - Multiple Solutions: Build Settings