A pleasant walk through computing


IdentityServer3 with PKCE Part 1 - Simple OAuth2 Server

This series simulates a native application accessing a protected Web API resource, using OAuth2 via IdentityServer3. It demonstrates using Proof Key for Code Exchange (PKCE), and is in four parts:

  1. Build a simple authorization server, consumed by native application.
  2. Build a protected resource.
  3. Persist server configuration to database.
  4. Persist user data to database using Microsoft.Identity and SQL Server.

Important
This series does not create an OpenID Connect (OIDC) server. It is OAuth-only, since the PKCE specification doesn't require OIDC.

Overview - Why PKCE?

References
A Guide to OAuth 2.0 Grants
Proof Key for Code Exchange

Mobile (i.e. "native") applications are difficult when it comes to authorizing users, for two reasons: they are long-lived, and they can't keep a secret.

In the descriptions below, "two-step" means calling the authorization endpoint then the token endpoint, "one-step" means calling just the token endpoint, "trusted" means the client can keep a secret key, "short-lived" means only an access token, and "long-lived" means an access+refresh token.

The standard OAuth 2 grants are:

  • Authorization code grant (two-step, trusted, long-lived)
  • Implicit grant (one-step, short-lived, meant for JavaScript-only browser apps)
  • Resource owner credentials grant (one-step, trusted, long-lived, for passing user/password)
  • Client credentials grant (one-step, trusted, short-lived, meant for app access, not users)
  • Refresh token grant (one-step, requires access token)

None of these fits well for a mobile app, which is untrusted but long-lived. The solution is Proof Key for Code Exchange (PKCE). PKCE generates a temporary secret string (the code verifier) and a value derived from it (the code challenge). The client sends the challenge to the authorization endpoint, which stores it, then sends the verifier to the token endpoint, which checks it against the stored challenge. This mitigates the threat of another application capturing the authorization code; without the verifier, the auth code is useless.
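In the "S256" method from RFC 7636, the challenge is the base64url-encoded SHA-256 hash of the verifier, with padding stripped. Here's a quick Python sketch of just that math (the C# client later in this post gets the same result from IdentityModel's ToCodeChallenge helper):

```python
import base64
import hashlib
import secrets

# Code verifier: a high-entropy random string (RFC 7636 allows 43-128 chars).
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

# Code challenge ("S256" method): base64url(SHA-256(verifier)), padding stripped.
# The challenge goes to the authorization endpoint; the verifier is held back
# and sent only to the token endpoint.
digest = hashlib.sha256(verifier.encode("ascii")).digest()
challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print(len(verifier), len(challenge))  # both are 43 characters
```

Since the server stores only the hash, an attacker who intercepts the authorization code still can't produce the verifier that hashes to it.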

Authorization Server - Simplistic

References
Simplest OAuth Server
Simplest OAuth2 Walkthrough Code

We'll create a simple OAuth2 server using IdentityServer3. We won't even create a Web API, so we can see just the server in operation. In other words, we'll be able to get tokens, but have nothing to use them with.

Create a new Empty web application project named AuthAndApi

2017-05-03_135031

Install packages for the authentication server. This assumes using IIS for the web host.

Install-Package IdentityServer3 -Project AuthAndApi
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.1.0 -Project AuthAndApi

All the IdentityServer documents say RAMFAR (runAllManagedModulesForAllRequests) should be enabled in web.config.

  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>

Add Folders for IdOptions and IdUsers. These represent the database storage we'll configure later.

In IdOptions folder, add these classes.

Scopes.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Models;

namespace AuthAndApi.IdOptions
{
    public class Scopes
    {
        public static List<Scope> Get()
        {            
            return new List<Scope>
            {
                //Create the email scope for OAuth. Later, when using OpenID Connect, return StandardScopes.Email instead
                new Scope() { Name = "email", DisplayName = "Email", Claims = new List<ScopeClaim>() { new ScopeClaim("email", true) } }
            };
        }
    }
}

Clients.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Models;

namespace AuthAndApi.IdOptions
{
    public class Clients
    {
        public static List<Client> Get()
        {
            return new List<Client>()
            {
                //A client is an application configured to request tokens.
                //The resource, such as Web API, is configured to accept these tokens.
                new Client()
                {
                    ClientName = "Pretend Android Mobile App",
                    ClientId = "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef",
                    Enabled = true,
                    //About Reference vs JWT tokens
                    //https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/
                    AccessTokenType = AccessTokenType.Reference,
                    Flow = Flows.AuthorizationCodeWithProofKey,
                    AllowedScopes = new List<string>
                    {
                        "email"
                    },
                    //URIs the authorization code is allowed to be redirected to
                    RedirectUris = new List<string>()
                    {
                        "http://localhost:19191/"
                    },
                    //Shouldn't be required for this flow, but is
                    ClientSecrets = new List<Secret>()
                    {
                        new Secret("e9711973-edd7-496f-b415-b10ad0667305".Sha256())
                    }
                }
            };
        }
    }
}

In IdUsers folder:

Users.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Services.InMemory;
using System.Security.Claims;
namespace AuthAndApi.IdUsers
{
    public class Users
    {
        public static List<InMemoryUser> Get()
        {
            return new List<InMemoryUser>
            {
                new InMemoryUser
                {
                    //Subject is identifier? Could be external identity,
                    //so it's not our own database primary key
                    Subject = "1",
                    Username = "alice",
                    Password = "secret",
                    Claims = new List<Claim>()
                    {
                        new Claim("email", "alice@example.com")
                    }
                }
            };
        }
    }
}

Add Startup.cs. This is the class that builds the OWIN pipeline, so the order in which features are added is important.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Configuration;
using Owin;
using AuthAndApi.IdOptions;
using AuthAndApi.IdUsers;


namespace AuthAndApi
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            //Configure IdentityServer for issuing authentication tokens
            var options = new IdentityServerOptions
            {
                Factory = new IdentityServerServiceFactory()
                .UseInMemoryClients(Clients.Get())
                .UseInMemoryScopes(Scopes.Get())
                .UseInMemoryUsers(Users.Get()),
                //SSL MUST be used in production
#if DEBUG
                RequireSsl = false
#endif
            };
            //add to the pipeline
            app.UseIdentityServer(options);
        }
    }
}

The basic IdentityServer is configured for OAuth. For now, we don't need a web page to open, since this is a server, so turn off displaying the page in project Properties.

2017-05-06_074519

Note about Reference vs JWT tokens
There are two general types of access tokens IdentityServer can return: Reference, or JSON Web Token (JWT). JWTs are self-contained; they can be validated without contacting the server. But they're slightly more complicated to configure. One advantage of using Reference tokens is that they can be quickly revoked, since the resource must call the authorizing server on each use. The disadvantage is this increases network traffic.

Pretend Mobile App - Get Token

References
Simplest OAuth Server
Simplest OAuth2 Walkthrough Code

Now we're going to get an authorization code and token from the server, using a native application that simulates a mobile app. In the Authorization Code Grant, the authorization code is retrieved by using an agent--normally a web browser. Mobile apps can open browsers on behalf of applications, providing a seamless experience. They can also register special URIs to receive requests. The basic flow is:

Front Channel

  1. App requests code from the server's authorization endpoint, and includes what URI to post the results to.
  2. Browser opens, server presents authentication/authorization screens, then posts code to supplied URI.

Back Channel

  1. App reads the code and sends request to the server's token endpoint.
  2. Server validates, then responds with the access token.

Add a standard console application to the solution, named PretendMobile.

2017-05-06_083908

Add packages. These aren't required (and wouldn't be available for a mobile application). They just include helper classes and methods for the HTTP communication. Anything they do could be done with HttpClient or some other appropriate network library.

Note
Version number is important! Later versions are for .NET Core.

Install-Package IdentityModel -Version 1.9.2 -Project PretendMobile

Program.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
//Added
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Net.Http;
using IdentityModel;
using IdentityModel.Client;

namespace PretendMobile
{
    class Program
    {
        static void Main(string[] args)
        {
            var response = GetAccessToken();
            Console.WriteLine("Access Token: " + response.AccessToken);
            Console.ReadLine();
        }

        static TokenResponse GetAccessToken()
        {
            string clientId = "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef";
            string clientSecret = "e9711973-edd7-496f-b415-b10ad0667305";
            string scope = "email";
            string redirectUri = "http://localhost:19191/";
            string verifier = CryptoRandom.CreateUniqueId(64);
            string codeChallenge = verifier.ToCodeChallenge();
            string authorizationEndpoint = "http://localhost:27230/connect/authorize";
            string tokenEndpoint = "http://localhost:27230/connect/token";

            //Front channel to get the authorization code
            //This simulates the mobile app starting a browser session on the authorization server,
            //allowing the user to authenticate,
            //while the app waits for the redirect from the auth server that contains the code.
            OpenBrowserToAuthenticate(authorizationEndpoint, clientId, scope, redirectUri, codeChallenge);
            string code = ReceiveAuthCodeFromServer(redirectUri);
            Console.WriteLine("Authorization Code: " + code);

            //Back channel to get the access token
            //The client secret is still required by IdentityServer, even though it's not
            //really secret. The code challenge and verifier are the real secret.
            //The redirectUri is part of the verification process. There's no actual redirection.
            var client = new TokenClient(
                tokenEndpoint,
                clientId,
                clientSecret);
            return client.RequestAuthorizationCodeAsync(code, redirectUri, verifier).Result;
        }

        static void OpenBrowserToAuthenticate(string authorizationEndpoint, string clientId, string scope, string redirectUri, string codeChallenge)
        {
            string nonce = CryptoRandom.CreateUniqueId(64);

            AuthorizeRequest request = new AuthorizeRequest(authorizationEndpoint);
            string url = request.CreateAuthorizeUrl(
                clientId: clientId,
                responseType: "code",
                scope: scope,
                redirectUri: redirectUri,
                nonce: nonce,
                responseMode: OidcConstants.ResponseModes.FormPost,
                codeChallenge: codeChallenge,
                codeChallengeMethod: OidcConstants.CodeChallengeMethods.Sha256);

            Debug.WriteLine(url);
            Process.Start(url);
        }

        static string ReceiveAuthCodeFromServer(string redirectUri)
        {
            var web = new HttpListener();
            web.Prefixes.Add(redirectUri);
            Console.WriteLine("Listening for request from auth server...");
            web.Start();
            var req = web.GetContext().Request;
            Stream body = req.InputStream;
            var encoding = req.ContentEncoding;
            var reader = new StreamReader(body, encoding);
            //The body is form-urlencoded, e.g. "code=abc123". Take just the
            //code field, in case other fields are posted with it.
            string code = reader.ReadToEnd()
                .Split('&')
                .First(p => p.StartsWith("code="))
                .Replace("code=", "");
            Console.WriteLine("Got it, closing.");

            //Not sure how, but the mobile app should end the browser session, maybe when the app
            //gets the response.
            body.Close();
            reader.Close();
            web.Close();
            return code;
        }

    }
}

Let's configure the solution to run both projects. Right-click Solution and choose Properties. Change the Startup Project to "Multiple startup projects," the Actions to "Start," and be sure the identity server project starts first.

2017-05-06_091226

Run the solution.

  1. The server will start.
  2. The console application will open a browser to the /authorize endpoint, requesting an authorization code.
  3. Authenticate using credentials "alice" and "secret".
  4. Authorize the requested scope(s).
  5. The authorization code is sent back to the listening application.
  6. The app sends a direct request to the /token endpoint, requesting an access token.
  7. The server returns the token.

2017-05-06_104414
2017-05-06_104502
2017-05-06_104651
2017-05-06_104726

Wrap Up

We've built a basic OAuth2 server that uses the Authorization Code Grant plus PKCE for dynamically generated client secrets, and can get an access token. In the next part, we'll build a protected resource that requires a token.

TFS Continuous Integration Walk Through Part 5c - Multiple Solutions: Dependencies

ci-logo

This is part of a series of walk throughs exploring CI in TFS, starting from the ground up. The entire series and source code are maintained at this BitBucket repository.

https://bitbucket.org/bladewolf55/tfs-ci-samples

Previous Part: TFS Continuous Integration Walk Through Part 5b - Multiple Solutions: Simple Project References

What I'm Talking About

Businesses often have source code that's years or decades old, and accumulate problems due to how that code was structured into folders. Especially difficult is how dependencies were managed. It's common to see:

  • Projects with project references to other projects in other solutions, sometimes nesting/cascading in a tangled mess.
  • Third party libraries that require installation, such as UI controls.
  • Multiple ways of managing .dll dependencies.

Some common challenges--and reasons why the above happen--are:

  • Multiple projects depend on a shared project, and they often need to step through the shared project's code.
  • Over the years, different developers did things how they liked.
  • Source control wasn't used, or changed.

Where I'll End Up

I'll start with a set of solutions that have some dependency problems. I'll show how they can work with continuous integration. Then, I'll improve the dependency handling.

A Problematic Structure

Let's imagine a TFS repository. Instead of a separate Team Project for each solution, there's a single Team Project named $Main that has all the solutions underneath it.

In this folder structure, I'm showing solution folders with their project folders below. So, ReverseIt is a solution folder with the ReverseIt project folder below it, which is the default Visual Studio layout.

$/Main
_Shared
   NameDb
     NameDb
ReverseIt
  ReverseIt
    > Depends on RevEngine (project reference)
  RevEngine
    > Depends on jamma.dll
ReverseNames
  ReverseNames
    > Depends on RevEngine (project reference)
    > Depends on NameDb (NuGet package)

  • NameDb DLL returns a list of names, is packaged using NuGet, and stored in a local source.
  • ReverseIt Console reverses whatever text you type in.
  • RevEngine DLL has ReverseText method. It is a project reference.
  • Jamma.dll is a third party security dll. The company is out of business.
  • ReverseNames Console displays a list of reversed names coming from NameDb.

What are the pros and cons of this approach?

Pros

  • If you Get Latest on $Main, all the solutions are in their correct relative folders.

Cons

  • You often have to get source you don't need.
  • The dependency on relative paths is brittle.
  • You can't use TFS's project management tools effectively.
  • Doesn't scale. What if you had fifty solutions using this approach?
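Before restructuring anything, it helps to know how many of these tangled references exist. Here's a rough Python sketch that walks a local checkout of $/Main and flags ProjectReferences that escape their own top-level solution folder. The regex is a simplification, not a full MSBuild parser, and the layout assumptions match the tree above:

```python
import re
from pathlib import Path

REF = re.compile(r'<ProjectReference Include="([^"]+)"')

def cross_solution_refs(root):
    """Return (csproj, reference) pairs where the reference leaves the
    project's top-level solution folder."""
    root = Path(root).resolve()
    findings = []
    for csproj in root.rglob("*.csproj"):
        solution = csproj.relative_to(root).parts[0]
        for ref in REF.findall(csproj.read_text()):
            target = (csproj.parent / ref.replace("\\", "/")).resolve()
            try:
                rel = target.relative_to(root)
            except ValueError:
                # Reference escapes the checkout entirely -- definitely flag it.
                findings.append((str(csproj.relative_to(root)), ref))
                continue
            if rel.parts[0] != solution:
                findings.append((str(csproj.relative_to(root)), ref))
    return findings
```

Run against the layout above, this should flag ReverseNames' project reference into ReverseIt/RevEngine.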

Creating CI Builds As Is

My manager says, "We need to get these projects into TFS Build."

I ask, "Can I restructure TFS?"

He says, "Not yet."

I say, "OK."

Since I'm pretty sure there are dependency problems, the first thing I decide to do is spin up a clean machine, install Visual Studio with no changes, Get Latest on $Main, and try to build all the solutions.

What's this!? Multiple failures? Oh, no! What went wrong?

  1. ReverseNames failed because we're depending on an in-house NuGet package source, and didn't configure that, so the NameDb dependency didn't exist.
  2. RevEngine failed because Barry's the only developer who has ever worked on RevEngine, and only his machine has jamma.dll. It was never checked into source control.

Quite a bit more could go wrong, but you get the idea. Let's fix these with an eye toward our eventual build server.


BONUS CODE!

If I had lots of solutions, I could build all of them using two files in the root of the folder that has all the solution folders.

BuildAllSolutions.targets

<Project ToolsVersion="14.0"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Default">
  <!-- http://stackoverflow.com/a/9609086/1628707 -->
  <ItemGroup>
    <AllFiles Include=".\**\*.sln"/>
  </ItemGroup>
  <Target Name="Default">
    <PropertyGroup>
    <BuildCmd>@(AllFiles->'&quot;$(MSBuildToolsPath)\msbuild.exe&quot; &quot;%(Identity)&quot; /v:q /fl /flp:errorsonly;logfile=build-logs\%(filename)-log.txt','%0D%0A')</BuildCmd>
    </PropertyGroup>
    <Exec Command="mkdir build-logs" Condition="!Exists('build-logs')" />
    <Exec Command="$(BuildCmd)" />
  </Target>
</Project>

BuildAllSolutions.cmd

rem path to your latest VS build version
"C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe" BuildAllSolutions.targets
pause

Running the cmd file creates a folder named "build-logs", recursively builds each solution, and outputs each solution's errors. If a solution's log file is not empty, there was a build problem.

END BONUS CODE


Dealing With a Local NuGet Package Source

There are four (technically five or six!) places to store NuGet config files containing package source information, and two ways to configure package source in TFS Build.

NuGet Config File Locations

Let's assume our in-house NuGet source is located at http://ngserver/nuget.

  1. User Profile - Enter it into Visual Studio's settings. This is fine for regular development, but not good for a build server because the build agent service will run as either Local System or a specific user account such as "tfsagent".

2017-03-17_155817

You can also manually edit the user profile's nuget.config, which is what the Visual Studio setting dialog is doing. The file is located at %APPDATA%\NuGet\NuGet.config. You add the source under packageSources.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <activePackageSource>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
  </activePackageSource>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <bindingRedirects>
    <add key="skip" value="False" />
  </bindingRedirects>
</configuration>

  2. Solution - Create a solution-level nuget.config file.

You can create a file named nuget.config, put it in your solution's root and add it to source control. This will determine which NuGet sources the solution uses.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <!-- uncomment clear if you want to ONLY use these sources -->
    <!-- otherwise, these sources are added to your list -->
    <!-- <clear /> -->
  <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Note: NuGet 3.3 and earlier looked for a config file in a solution's .nuget folder. Not recommended.

  3. Machine-Wide - Create a machine-wide config file.

The machine-wide story is confusing. A machine-wide NuGet config file can reside in one of two folders. The folder changed with the introduction of NuGet 4.0, which is used by Visual Studio 2017.

  • %ProgramData%\NuGet\Config\ (NuGet 3.x or earlier), or
  • %ProgramFiles(x86)%\NuGet\Config\ (NuGet 4.0+)

It can be named anything that ends with .config, including NuGet.config. However, a custom name seems recommended.

For example, I could name the file SoftwareMeadows.Online.config. It would contain the package source like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Network Administrators will like this option because they can use it with Group Policies. A policy could target computers in Developers and Build Servers groups and always create the desired config file.

Note: NuGet 4.x does not look for config files in ProgramData.

  4. Default Config

If you're using NuGet 2.7 through 3.x, default sources can also be configured in the file %ProgramData%\NuGet\NuGetDefaults.config. These show up in Visual Studio as local, not machine-wide, sources.

Note: This file and location will not work with NuGet 4.x

  5. Other?

You could also put a nuget.config file in any folder and specify it using the nuget.exe -configFile switch, e.g. nuget restore -configfile c:\my.config.

Notes

  • Testing shows that in some cases if the package source URL is the same, only one source key is used. For example, if NuGet.config and NuGetDefaults.config have an identical source URL, the key from NuGet.config is used.
  • It appears a source listed in NuGetDefaults.config cannot be removed using the <clear /> tag. It can only be disabled.

Specifying the Config in TFS Build

Whichever method you use below, ensure the agent service has permissions to the config file. The service name will be something like "VSO Agent ([AgentName])". Microsoft recommends creating a user named "tfsagent". The default is Local Service.

2017-03-17_162121

  1. TFS Machine-Wide Path - RECOMMENDED

Personally, for internal development, I'd add the package source to the build server's machine-wide config file and be done with it. So, my path--assuming VS 2015 installed--would be something like: %ProgramData%\NuGet\Config\SoftwareMeadows.Online.config

2017-03-17_162400

Remember from above this will change if you install Visual Studio 2017 on the build server (or use NuGet 4.x).

  2. Add nuget.config to the build agent's profile.

Your build server is basically a development machine, with an agent automatically building the software. If you run the service using tfsagent, you could create/edit a nuget.config file found at C:\Users\tfsagent\AppData\Roaming\NuGet.

  3. TFS NuGet Installer Path Field

If you check in a nuget.config file with the solution, enter the path in your build definition's NuGet Installer step. This path is relative to the Path to Solution. I would use this solution if my team didn't all work in the same network, and so needed to use an authenticated NuGet server such as MyGet.

2017-03-17_162452

  4. Use the -configFile switch

You could also put a nuget.config file somewhere on the build server (or network?), and use the -configFile switch. Remember the build agent service needs permission to read the file.

2017-03-17_162410

Dealing with Barry

Barry's been with us for five years. Barry drinks his coffee black, and lots of it. Barry knows where every file on his machine is, and would prefer you didn't look over his shoulder. Barry has his code, please leave it alone.

Unfortunately, Barry assumes he'll always be here, and hasn't ever tested what would happen if his machine imploded in a fiery death. I go to Barry and say, "Your code doesn't build on a clean checkout." Barry storms over to my computer and starts typing. I observe, take notes, and when I see him copying jamma.dll, ask, "What's that?"

"Oh," he mumbles, "license dll. Forgot about that. Kinda important. RevEngine won't run without it."

I don't say anything, but make a note that, once I have all the software building and deployable, Barry might not be long for our company. In the meantime, there are two ways I can handle this old dependency.

  1. Ensure it's in a folder under the solution or project, reference it there, and add it to source control.

2017-03-17_164300

  2. Create the dll as a NuGet package, and add it to my NuGet server.

Download nuget.exe and put it in the same folder as the dll (or put it in its own folder and add that folder to the system PATH variable).

Open a command prompt, change directories to where the dll is, and run nuget spec jamma.dll.

Edit the resulting jamma.dll.nuspec file, changing or removing elements as desired.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>jamma.dll</id>
    <version>3.2.52</version>
    <authors>Jamma Ltd</authors>
    <owners>Jamma Ltd</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>License file</description>
    <copyright>Copyright 2017</copyright>
  </metadata>
</package>

Important: Now move the dll into a subfolder named "lib"

Package the dll using nuget pack.

Add the package (the .nupkg file generated by nuget pack) to your NuGet server however is appropriate; it might be as simple as copy/paste, or using nuget push commands. See the NuGet documentation.

Remove the reference from the project, and re-add from NuGet. Build and test. If everything's OK, delete the old jamma.dll file and folder.

Which would I do? Number 2, so that all my external dependencies are handled the same way (NuGet).

It's Building, so Add to CI

I test again by deleting and re-getting all the source code, open each and restore any NuGet packages, and build all the solutions. Everything builds, so I'm ready to configure TFS Build for continuous integration.

All my solutions are under the same team project. I'll need to be careful when I create my build definitions that I'm only checking out and building the solutions I want. The key to that is not saving the definition until its Repository and Triggers have been configured.

I'll create the first definition in detail, then just list settings for the rest.

But first, the NuGet Package Source

If I haven't done it already, I'll add a machine-wide nuget configuration file that has my custom package source.

Create the file %ProgramData%\NuGet\Config\MySource.config with the source definition. In my case, I'm testing with a local NuGet server.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Test Server" value="http://localhost:7275/" />
  </packageSources>
</configuration>

NameDb

Let's start at the top with the simple one, NameDb. In TFS, navigate to the correct collection and the Main team project. Then open the Build tab, start a new build and choose the Visual Studio template.

2017-03-25_143134

Create using the repository source Main Team Project.

2017-03-25_143854

Delete the Index, Copy and Publish steps. I can add those back if we want them. I don't need the NuGet installer step, either, but I'll configure it as an example since the other projects need it.

2017-03-25_144248

Set the NuGet Installer path to the NameDb solution file.

2017-03-25_145021

Set the Visual Studio Build path to the NameDb solution file.

2017-03-25_145758

Set the Visual Studio Test "Test Assembly" to the solution folder path. This is the physical folder where the agent will have pulled the files from TFS, the same as the Local Path we'll choose in the Repository tab. The path definition means "run any dll with 'test' in its name under the build configuration folder, but not in the obj folders." Since our build configuration will be Release, it'll run Release/NameDb.Tests.dll.

2017-03-25_150526
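The include/exclude pattern can be approximated in Python to see which assemblies it would select. fnmatch is only a rough stand-in for the build task's minimatch rules, and the paths are hypothetical:

```python
import fnmatch

candidates = [
    "NameDb/NameDb.Tests/bin/Release/NameDb.Tests.dll",
    "NameDb/NameDb.Tests/obj/Release/NameDb.Tests.dll",   # build intermediate
    "NameDb/NameDb/bin/Release/NameDb.dll",               # no "test" in name
]

def is_test_assembly(path, configuration="Release"):
    p = path.lower()
    # Include: any dll with "test" in its name under the configuration folder...
    include = fnmatch.fnmatch(p, f"*/{configuration.lower()}/*test*.dll")
    # ...minus anything under an obj folder.
    exclude = fnmatch.fnmatch(p, "*/obj/*")
    return include and not exclude

print([c for c in candidates if is_test_assembly(c)])
# ['NameDb/NameDb.Tests/bin/Release/NameDb.Tests.dll']
```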

Do NOT Save!

Change to the Repository tab. This controls what source code gets checked out by the build. Change the mapped paths. We'll keep the cloaked Drops folder, even though we aren't publishing anything yet.

Notice the LocalPath is set. This is so the contents of the NameDb solution folder are placed under a _Shared\NameDb folder, just like in the repository. It's not strictly needed for this build, but remember how this works if you're building multiple solutions with project dependencies in relative folders.

2017-03-25_151640

Change to the Variables tab and ensure the BuildConfiguration is "release".

2017-03-25_152836

Change to the Triggers tab. This controls what starts a build. Check Continuous Integration and set the path to the NameDb folder. We only want to build if a file in NameDb changes.

2017-03-25_153129

Now Save the definition and give it a good name like "NameDb".

2017-03-25_153325

Finally, queue the build and see if it succeeds!

Important
Check each step's output for what you expect. In particular, check that tests ran! The test step will succeed even if it doesn't find any tests!

2017-03-25_153438

2017-03-25_155144

For the next solutions, I'll just list the settings for the build definitions. All of them use the Visual Studio template, and only keep the NuGet, Build and Test steps.

ReverseIt

BUILD TAB

  • NuGet Installer path to solution: $/Main/ReverseIt/ReverseIt.sln
  • Build solution path: $/Main/ReverseIt/ReverseIt.sln
  • Test assembly path: ReverseIt\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

REPOSITORY TAB

  • Map Server Path $/Main/ReverseIt to Local Path ReverseIt
  • Map Cloaked Path $/Main/ReverseIt/Drops

VARIABLES TAB

  • BuildConfiguration: release

TRIGGERS TAB

  • Include $/Main/ReverseIt

ReverseNames

Strictly speaking, I don't need to build RevEngine. Any change to it will trigger ReverseIt to build, and if that fails then someone--hopefully I--will be notified. What I do need to do is get the source code into the correct relative folder, and install the NuGet packages. In short, I need to ensure RevEngine can be used by ReverseNames.

So, I'm going to have two sets of steps: one for RevEngine, and one for ReverseNames. It's a brittle definition: whoever works on RevEngine needs to know about this build, too, in case something needs to change.

BUILD TAB

  • NuGet Installer RevEngine

    • Path to solution: ReverseIt\**\*.csproj
    • NuGet Arguments: -PackagesDirectory ..\packages

    Notice I set the path to the local project files. This will restore packages for any .csproj file found. I also explicitly say where to put the packages folder relative to the .csproj file.

  • NuGet Installer ReverseNames

    • Path to solution: $/Main/ReverseNames/ReverseNames.sln
  • Build ReverseNames

    • Solution path: $/Main/ReverseNames/ReverseNames.sln
  • Test ReverseNames

    • Assembly path: ReverseNames\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

REPOSITORY TAB
This is critical. I'm telling the build exactly which project folders to pull from ReverseIt, i.e. RevEngine and RevEngine.Tests. This way I don't pull and build ReverseIt.csproj.

If I add a test project later, I'll need to add its mapped path here. Note that I removed the Drops cloaked path since I don't need it.

  • Map Server Path $/Main/ReverseIt/RevEngine to Local Path ReverseIt\RevEngine
  • Map Server Path $/Main/ReverseIt/RevEngine.Tests to Local Path ReverseIt\RevEngine.Tests
  • Map Server Path $/Main/ReverseNames to Local Path ReverseNames

VARIABLES TAB

  • BuildConfiguration: release

TRIGGERS TAB
I'm triggering the build if ReverseNames changes.

  • Include $/Main/ReverseNames

Here are some screenshots of the ReverseNames definition.

2017-03-26_190426

2017-03-26_190546

2017-03-26_190620

Improving the Solutions, Dependencies and Team Projects

What I've done so far works. Sort of. But it's not exactly ideal, especially if there were fifty solutions, not just three. One big thing I lose is the ability to maintain separate project boards and work items per project. To do that, I'd really like a separate Team Project per solution (or in some cases it could be multiple solutions).

The team projects might look like this.

$\NameDb
$\RevEngine
$\ReverseIt
$\ReverseNames

And then there's the project reference. The project reference is bound to cause headaches in the future. One developer will change RevEngine and silently break the ReverseNames build. I say "silently," because it could be something like adding a new unit test project that doesn't get run by ReverseNames because it doesn't get pulled from source control.

Because it's a shared dependency, RevEngine needs to be in its own solution under _Shared and published as a NuGet package.

Right about now, someone's saying, "But but but! I need to be able to step through that code! And make changes that I can test against ReverseIt!!"

This might point to too much coupling between the projects, but so what? That's what you need. For debugging, the symbols package we'll publish later lets you step through RevEngine's code without a project reference.

If you really need to change code in the context of the solution:

  1. Drop the RevEngine NuGet reference from ReverseNames.
  2. Get the latest RevEngine code into its _Shared\RevEngine folder.
  3. Temporarily add the project reference to ReverseNames.
  4. Do the work.
  5. When finished, drop the RevEngine project reference.
  6. Re-add the NuGet reference (which doesn't have your changes, sorry).
  7. Open the RevEngine solution and run the tests.
  8. Commit the RevEngine changes, which, sorry, need to be taken through QA, published, etc.
  9. When that's finished, update the RevEngine NuGet package in ReverseNames.
  10. Run the tests, commit, QA, etc.

In other words, you need to treat RevEngine as if it were some third party assembly like Entity Framework or NLog.

All of this leads to...

Key Thinking to Managing Dependencies

  1. Treat your dependencies as if they're third party.
  2. Shared dependencies need to be in their own solutions.
  3. What does it take to check out, build and test the solution on a new computer?
  4. How would you store the project(s) on GitHub or other public remote repository?

The Plan

I'm going to do just three things, but they'll make a big difference.

  1. Reorganize the solutions into discrete team projects
  2. Publish shared project references as NuGet packages
  3. Update projects to use the packages

Important
Before getting started in a production environment, I'd disable all of the affected TFS Build definitions. I don't want anything running if I don't need to.

I would also make a backup copy of all the affected source code, just in case something gets lost.

Reorganizing into Team Projects

First, I'll create my new team projects. Then I'll move my code.

Note
You can also add team projects using the Visual Studio Team Explorer.

  1. Open TFS in the browser, e.g. http://nesbit:8080/tfs. Or, if you know your collection's name, you can open it directly and skip to step 4. (e.g. http://nesbit:8080/tfs/CICollection3/_admin)

  2. Click the upper right hand corner settings "gear" icon to open the Manage Server page.

    2017-03-27_085328

  3. Select the Collection holding your team projects, and click "View the collection administration page".

    2017-03-27_085913

  4. Click New Team Project, enter the information and Create.

    2017-03-27_090645

Repeat to create the four team projects.

2017-03-27_091136

We can't move the code using the TFS web application, so:

  1. Open Visual Studio

  2. Open Team Explorer and click the plug icon to Manage Connections

    2017-03-27_091617

  3. Double click the collection you're using to connect to it.

    2017-03-27_091812

  4. Open Source Control Explorer.

    2017-03-27_091911

  5. The new team project folders need to be mapped to local folders. This is kind of a pain, but with TFS Version Control there's no getting around it. It's easier with git. I would create a new folder named something like TempTeams to hold the new team projects, finish the moves, then delete all my source code mappings and start over. Like I said, a pain. Be very careful when doing all this that you don't accidentally delete source code from TFS you didn't want to.

  6. To map a team project folder, select it and click the Not Mapped link. Enter the destination folder, and when prompted Get the latest code (there won't be any, that's OK). Map all the team project folders.

    2017-03-27_093743

  7. Open the NameDb solution. TFS still doesn't natively allow moving multiple files/folders at once, so we need to move the project contents one at a time. First, I'll move the solution file. Right-click it, select Move, and enter the NameDb team project. The file will be moved to the NameDb team project.

    2017-03-27_092818

    2017-03-27_094508

  8. Move the NameDb and NameDb.Tests project folders the same way. You can right-click and move an entire folder, just not multiple folders.

  9. When finished, Commit the changes.

  10. You can now delete the NameDb folder from under $/Main/_Shared and commit that change.

Now I'm going to move just the RevEngine project folders to the new RevEngine team project. Later, I'll create their solution file.

Open the ReverseIt folder. Move the RevEngine and RevEngine.Tests folders.

2017-03-27_095844

2017-03-27_100012

At this point, I move the remaining ReverseIt files/folders to their new team project. Likewise the ReverseNames solution.

Commit the changes. Delete the folders from $/Main, and commit that change, too.

Do NOT delete the $/Main team project! The TFS Build definitions would be deleted, too!

Finally, go to the RevEngine project in your local working folder, i.e. ..TempTeams\RevEngine\RevEngine, and open RevEngine.csproj. This will open the project in a solution; we just haven't saved the solution file yet.

Add the RevEngine.Tests project to the solution.

2017-03-27_101947

Now I have to be careful. I select the solution in Solution Explorer. Then, File > Save RevEngine.sln As.

2017-03-27_102153

In the Save As dialog, I navigate up one folder, so my solution file is at the root of RevEngine.

2017-03-27_102556

Now, I drag and drop the solution file into the Source Control Explorer's RevEngine team project.

2017-03-27_102848

Commit the change.

My projects are reorganized, and a couple will build (NameDb and RevEngine). Time to handle the RevEngine dependency.

Publish shared project references as NuGet packages

I'm still working in the TempTeams folder. I'll wait until everything's working before going back to my preferred folders.

Creating NuGet packages can be complex. For this walkthrough, I'm showing the simplest thing that works; I'm sure these steps are not ideal. The following assumes I have a local NuGet server at http://localhost:7275/nuget that doesn't require an API key for pushing packages (not recommended), and does allow pushing symbols.

  1. Open the solution, and edit the RevEngine Project Properties > Application > Assembly Information.

  2. Ensure Title, Description, Company and Product are filled in.

    2017-03-27_113329

  3. Save and build the solution. You must build, because NuGet packs the built DLL; it does not build the solution for you.

  4. Download the latest recommended NuGet.exe file.

  5. Put nuget.exe in the RevEngine project folder.

  6. Open a command prompt and change directory to the RevEngine project folder.

  7. Run nuget spec to create a RevEngine.nuspec file

  8. Edit RevEngine.nuspec and make these changes:

    <?xml version="1.0"?>
    <package>
      <metadata>
        <id>$id$</id>
        <version>$version$</version>
        <title>$title$</title>
        <authors>$author$</authors>
        <owners>$author$</owners>
        <description>$description$</description>
      </metadata>
    </package>
    
  9. Run nuget pack -Symbols to create the regular and symbols package. Remember that, in our case, we want a symbols package so that we can step through the assembly without using a project reference.

  10. Run nuget push *.nupkg -Source http://localhost:7275/api/v2/package. This will push both of the packages.

Update projects to use the packages

This one should be pretty easy. In any solution that has RevEngine as a project reference, remove the project and the reference, then install the NuGet package. Notice that jamma.dll is installed as well, because RevEngine depends on it and the RevEngine project was referencing the jamma.dll NuGet package when it was packaged.

After updating, if I open ReverseIt (for example), put a breakpoint on this line,

Console.WriteLine(util.ReverseText(text));

then run the program, I can step into RevEngine.TextUtilities.cs, which is now part of the debugging symbols.

Update TFS Build Definitions

It's time to get our builds working again!

TFS doesn't natively support copying/moving build definitions. One solution is to write code using the TFS web API to clone definitions:

http://www.eidias.com/blog/2016/3/7/tfs-2015-cloneimportexport-build-definition-between-team-projects

However, there's a TFS extension for this, which really saves the day. You can download it here.

Export/Import Build Definitions

Note
If using TFS 2015, you must use version v0.0.2. Later versions only work with TFS 2017.

Follow the instructions to install the extension.

To install 'Export/Import Build Definition' (EIBD) on Team Foundation Server 2015 Update 2 and above, perform the following steps:

  1. Navigate to the Team Foundation Server Extensions page on your server. (For example, http://someserver:8080/tfs/_gallery/manage)
  2. Click Upload new extension and select the file you have just downloaded.
  3. After the extension has successfully uploaded, click Install and select the Team Project Collection to install into.

To move NameDb:

  1. In the Main team project Build tab, right-click the build definition and choose Export. Save the json file to a folder such as TfsBuildExports.

    2017-03-27_144558

  2. Change to the NameDb team project Build tab. EIBD has a known limitation: the Export/Import menu items can only be seen on a build definition name, not the "All build definitions" item. So, if necessary, create an empty definition and save it with a non-conflicting name.

  3. Right-click a definition and choose Import, selecting the .json file.

  4. Edit the imported definition.

  5. Make the following changes.

BUILD

  • NuGet Installer path: $/NameDb/NameDb.sln
  • Build path: $/NameDb/NameDb.sln
  • Test path: **\$(BuildConfiguration)\*test*.dll;-:**\obj\**

REPOSITORY

  • Map $/NameDb, leave Local Path empty

    2017-03-27_145831

TRIGGERS

  • Include $/NameDb

  6. Test!

ReverseIt

Use the same approach as NameDb, namely changing the paths in Build, Repository and Triggers. (In fact, ReverseIt would work with the default Visual Studio template.)

ReverseNames

Likewise, ReverseNames can be simplified because I no longer have the RevEngine project to deal with. In fact, all I have to do is delete anything related to RevEngine, then update the remaining paths as I've done above.

RevEngine

This is a new build definition, and it follows the same simplified pattern as above.

What Just Happened?

I'll tell you what. Our build definitions got simpler because

  1. We converted our project references to NuGet packages.
  2. We contained our code in team projects.

Admittedly, the sample was a pretty simple case. I could have a team project that legitimately encompasses multiple solutions. But if I still apply the key principles from above, I can have clean maintenance and simpler builds. As a bonus, it should be much easier to switch to git if I want, since I'm now treating my code as discrete instead of monolithic.

Clean Up

I can now delete the $/Main team project. But, despite there being a right-click menu item, I can't do it from Source Control Explorer. So, (sigh), back to the web interface and my collection administration page. Select Delete from the dropdown to the left of the team project.

2017-03-27_101321

Am I sure the team project is empty? If so, enter its name and delete it.

2017-03-27_101437

References

Creating a Package
Configuring NuGet Behavior
Using Private NuGet Repository
How to Get TFS2015 Build to Restore from Custom NuGet Source 1
How to Get TFS2015 Build to Restore from Custom NuGet Source 2
NuGet Package Source Config Locations
Introducing NuGet 4
Specifying NuGet Config path for TFS Build

Next Part: TFS Continuous Integration Walk Through Part 5d - Multiple Solutions: Build Settings

Sane Database Schema Conventions

Rationale

These are sane conventions for constructing and naming a database schema. They aren't new, and there's sure to be something someone doesn't like. They are biased toward .NET Entity Framework, which itself was influenced by the Ruby on Rails ActiveRecord conventions by David Heinemeier Hansson.

Using these conventions makes it easier to translate the tables into classes. While this isn't always desirable (or correct), it often is.

Sample

This sample schema exemplifies the conventions, and shows most relationships you'll encounter. It's semi-realistic. PropertyRecords is contrived to show a pseudo one-to-one relationship. (Note: MS SQL doesn't allow a true one-to-one structure, as different tables' rows can't be simultaneously created.)

 

sample-schema

  • A Customer has an Initial contact Employee, Support Employee and a Salesperson. An Employee can service multiple Customers.
  • An Employee can be a Salesperson.
  • A Customer has one or more Addresses, an Address belongs to one Customer.
  • An Address has one Property Record, a Property Record is for one Address.
  • A Customer has zero or more Orders, an Order has one Customer.
  • An Order can have many Vendors, a Vendor can fulfill many Orders.
  • An Order can have many Promotions, a Promotion is for zero or more Orders.
  • An Order Promotion has zero or more customer notifications.

Table Column Layout

I like my table columns ordered this way.

  1. Primary key
  2. Foreign keys
  3. Regular columns
  4. Audit columns

General Names

Pluralize Most Table Names

Pluralizing table names reduces the chances of keyword conflicts, and matches how the table will (typically) be treated in an ORM tool.

Yes            |No               
---------------|-----------------
Customers      |Customer
Salespeople    |Salesperson

Don't pluralize many-to-many join tables. By convention, keep the table name parts alphabetical.

Note
Using Code First, Entity Framework might pluralize this to OrderVendors. Personally, I'd use the Fluent API to force the table name to OrderVendor.

Yes            |No               
---------------|-----------------
OrderVendor    |OrdersVendors, OrderVendors, VendorOrder              

Use PascalCase

Tables and columns should be in PascalCase.

Yes            |No               
---------------|-----------------
Customers      |customer
OrderNbr       |orderNbr

No Dashes or Underscores

It's tempting to separate words in either table or column names, but don't. Keeping them PascalCased makes the transition to classes easier and clearer. Also, some databases don't play nicely with underscores, or dashes, depending on how they're used.

Yes            |No               
---------------|-----------------
OrderVendor    |Order_Vendor
OrderNbr       |order-number
CustomerId     |Customer_ID

Keys

My personal preference is to end key names with "Id", rather than "ID". It reads just as well, and is consistent with PascalCasing and .Net naming conventions.

Primary Key

Some people prefer a primary key of just "Id", but if you need to run SQL queries (and you will), it's easier to have the table name in the primary key for creating joins and reading the results.

SELECT  c.CustomerId, c.Name, a.AddressId, a.Address1
FROM    Customers c
        JOIN Addresses a on c.CustomerId = a.CustomerId

--Result:

CustomerId  Name  AddressId Address1
----------  ----  --------- ----------
        23  Ron         402 12 Main St
        47  Eve          11 3 Polo Ave        

Simple Tables
TableName + "Id"

Yes            |No               
---------------|-----------------
CustomerId     |CustomerID, Customer_id, Customer_ID

Many-to-Many Tables
A regular join table doesn't need its own primary key. Just use the other tables' primary keys to form a composite key.

OrderVendor
===========
OrderId  (PK, FK)
VendorId (PK, FK)
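In T-SQL, that join table might be sketched like this (the constraint names are my own convention, and the Orders and Vendors tables are assumed to already exist):

```sql
CREATE TABLE OrderVendor (
    OrderId  INT NOT NULL,
    VendorId INT NOT NULL,
    -- The composite primary key doubles as the uniqueness rule:
    -- an Order/Vendor pair can only appear once.
    CONSTRAINT PK_OrderVendor PRIMARY KEY (OrderId, VendorId),
    CONSTRAINT FK_OrderVendor_Orders  FOREIGN KEY (OrderId)  REFERENCES Orders (OrderId),
    CONSTRAINT FK_OrderVendor_Vendors FOREIGN KEY (VendorId) REFERENCES Vendors (VendorId)
);
```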

The resulting classes should have these properties

class Order 
{
  IEnumerable<Vendor> Vendors
}

class Vendor
{
  IEnumerable<Order> Orders
}

A join table with payload--one that has its own columns and/or will be joined to another table--should have its own primary key of Table1+Table2+Id.

OrderPromotion
==============
OrderPromotionId (PK)
OrderId          (FK)
PromotionId      (FK)
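A sketch of the payload version in T-SQL. The surrogate OrderPromotionId key is what lets another table (such as the customer notifications in the sample schema) reference an OrderPromotion row directly; the IDENTITY column and unique constraint are my own assumptions:

```sql
CREATE TABLE OrderPromotion (
    OrderPromotionId INT IDENTITY(1,1) NOT NULL,
    OrderId          INT NOT NULL,
    PromotionId      INT NOT NULL,
    CONSTRAINT PK_OrderPromotion PRIMARY KEY (OrderPromotionId),
    CONSTRAINT FK_OrderPromotion_Orders     FOREIGN KEY (OrderId)     REFERENCES Orders (OrderId),
    CONSTRAINT FK_OrderPromotion_Promotions FOREIGN KEY (PromotionId) REFERENCES Promotions (PromotionId),
    -- Still enforce one row per Order/Promotion pair.
    CONSTRAINT UQ_OrderPromotion UNIQUE (OrderId, PromotionId)
);
```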

The resulting classes

class Order
{
  IEnumerable<OrderPromotion> OrderPromotions
  //You can manually add this method to get Promotions
  IEnumerable<Promotion> Promotions
  {get {return OrderPromotions.Select(op => op.Promotion);}}
}

class Promotion
{
  IEnumerable<OrderPromotion> OrderPromotions
}

Foreign Key

When possible, use the referenced primary key name. If there are multiple foreign keys to the same table, end with the foreign primary key name.

Yes                       No               
-------------             -----------------
Employees
=========
EmployeeId (PK)

Customers                 Customers
=========                 =========
CustomerId (PK)           Customer_Id (PK)
SupportEmployeeId (FK)    SupportPerson (FK)
InitialEmployeeId (FK)    InitialEmp_ID (FK)

Addresses                 Addresses
=========                 =========
AddressId  (PK)           AddressId (PK)
CustomerId (FK)           CustID    (FK)
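The "Yes" side of the Customers table might look like this in T-SQL; note that both foreign keys reference the same EmployeeId primary key, and the column names tell you which role each one plays (the nullability is my own assumption):

```sql
CREATE TABLE Customers (
    CustomerId        INT NOT NULL PRIMARY KEY,
    SupportEmployeeId INT NOT NULL REFERENCES Employees (EmployeeId),
    InitialEmployeeId INT NOT NULL REFERENCES Employees (EmployeeId),
    Name              NVARCHAR(100) NOT NULL
);
```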

Date Columns

Use a verb, and end the date or datetime column names with "On".

Following this convention often leads to clearer column meanings, and consistency. For example, DateToPlace or PlaceDateTime becomes PlaceOrderOn.

Yes            |No               
---------------|-----------------
OrderedOn      |OrderDateTime
ShouldShipOn   |DateShipExpect, ExpectedShip_DT, AnticipatedDate

Don't Abbreviate

Abbreviations are, by their nature, ambiguous and culture-centric. Avoid them unless they are very common, consistent, and/or well-known in the organization or industry.

A good example of ambiguity is how to abbreviate "number". Even my schema example, "InvoiceNbr", is potentially ambiguous. But InvNo is worse. Is there another column "InvYes"?

Yes            |No               
---------------|-----------------
Customers      |Custs
CustomerId     |CustId
FirstName      |Fname
InvoiceNbr     |InvNo

Use Consistent Names (and Abbreviations if You Must)

  • If you must abbreviate, be consistent.
  • If it's spelled "InvoiceNbr" in one table, it's that way in all tables.
  • If everyone knows what DestinationBOL means, that might be OK. But maybe it's better to expand it to DestinationBillOfLading.

When in doubt, refer to Flatt's Law #6: Clarity is more important than brevity.

Yes            |No               
---------------|-----------------
InvoiceNbr     |InvoiceNum, InvNumber, Inv_Nmbr

Auditing

To be honest, I've often found audit columns to be more trouble than they're worth. I think if auditing is needed, it's better to have a separate audit history table where you can record many kinds of changes, including deletions.

But, if I am using them, and am tracking who took an action, I don't link to another table (such as Users or Employees), but instead record the name itself. This significantly reduces joins, and makes it easy to indicate that a process (rather than a person) performed an action. In other words, UpdatedBy is a string column and contains a value like "cflatt" or "Nightly Batch Process".

  • CreatedOn
  • CreatedBy
  • UpdatedOn
  • UpdatedBy
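If I do use them, the audit columns might be declared like this (the types and the CreatedOn default are my own assumptions; notice CreatedBy/UpdatedBy are plain strings, not foreign keys):

```sql
CREATE TABLE Orders (
    OrderId   INT NOT NULL PRIMARY KEY,
    -- ...regular columns...
    CreatedOn DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    CreatedBy NVARCHAR(100) NOT NULL,  -- e.g. 'cflatt' or 'Nightly Batch Process'
    UpdatedOn DATETIME2     NULL,
    UpdatedBy NVARCHAR(100) NULL
);
```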

Inheritance

Inheritance can be modeled in the database a couple of different ways. I prefer Table Per Type, as shown by the Employee and Salesperson tables. Note that the Salesperson table has an EmployeeId primary key. This is what implies the inheritance. In the application's class model, these would become:

public class Employee
{
  int EmployeeId
  string Name 
}

public class Salesperson: Employee 
{
  double CommissionPercent 
}

public class Customer
{
  int CustomerId
  string Name
  Employee SupportEmployee
  Employee InitialEmployee
  Salesperson Salesperson
}
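The corresponding Table Per Type tables might be sketched like this. Salespeople's primary key is also its foreign key to Employees, which is what implies the inheritance (column types are my own assumptions):

```sql
CREATE TABLE Employees (
    EmployeeId INT NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);

CREATE TABLE Salespeople (
    -- PK and FK are the same column: a Salesperson row
    -- cannot exist without its Employee row.
    EmployeeId        INT NOT NULL PRIMARY KEY
                          REFERENCES Employees (EmployeeId),
    CommissionPercent FLOAT NOT NULL
);
```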