
Sunday, June 25, 2017

Factory Resetting Android Without Losing Your Life (Metaphorically)


(photo: Antony Hell)

Sometimes you have to call it quits and admit your phone just isn't working right, and no number of help group suggestions is going to fix it. But, maybe, a factory reset will. Maybe you should just start over.

And yet, you don't really want to start over. You want your email, and your pictures. You want your apps, even if you have to reinstall them. You definitely do not want to wipe your phone only to discover you deleted something super-critical.

This guide can't guarantee you won't lose something important. There are too many different apps, settings, and places to hide. But it should give you a better chance, maybe make you think of something you hadn't before.

These instructions are for a Windows user. Sorry, Mac people. Also, my phone is a Nexus 5 running Android 7 (Nougat). Your settings and dialogs may have different labels.

Basically

  1. Get your account passwords ready
  2. Clean up your installed app lists
  3. Backup all your data
  4. Factory Reset
  5. Setup
  6. Test

Preparation

Set aside plenty of time for this. Make it easy on yourself by having:

  • Your phone's power adapter and cable.
  • A USB cable to attach to your computer.
  • WiFi access, otherwise the data usage will eat you up.
  • A fully charged phone.

WiFi and Cell Service

You shouldn't have to do anything to restore your cell service after the factory reset. That information is tied to your SIM card, or otherwise stored in a non-volatile way. But I can't guarantee you won't need to enter some kind of information required by your carrier.

If you have WiFi, you'll need your WiFi password, and you'll need to know your wireless network name.

Apps

Cleanup

By default, Google remembers every application you ever installed from the Play store, even if you uninstall it. After factory resetting, and entering your Google account, there's an option to restore all your installed apps. You may want to do that, or you may want to reinstall from scratch. I like doing the latter, because

  1. I only install what I really need, and
  2. I don't know if a previously installed app is causing my problems.

So, I want my application history to only show what's actually on my phone. Thankfully, the latest versions of the Play Store make this easier than before. To clean up your apps:

  1. Open the Play Store app.
  2. Switch to the Library tab. This lists all the applications you've installed in the past that aren't currently installed.

  3. Tap the X to delete applications that you don't plan on installing again. Be ruthless here. If you can delete them all, do it, because after the factory reset this tab should suddenly contain the applications you currently have installed. The goal is not to get confused when you manually reinstall apps.

Important

You cannot do this cleanup from the Google Play web site. You can only do it from your phone. So do it before you reset your phone.

Review

Now open the Installed tab.


This is a good time to review your applications and ask a few questions:

  1. Do I know this app's user/password?
  2. Does this app store data that I want to keep on my phone?
  3. Does the app have settings that I don't want to forget about, that were a pain to get working?

Other Apps

Did you install any apps from sources other than the Play Store? One likely culprit: Amazon, specifically their Amazon Underground and Music apps.

Passwords

Do you have applications that require a password, but you never have to enter one? Make sure you create a document with those usernames and passwords. Double-check what's in your document by logging out of the app and logging back in with the username and password from the document. If you have a password manager, make sure you know and test the master password. Here are some apps that might be auto-logging you in.

  • Your Google account. You probably never enter your Google password on your phone.
  • Other accounts. Look in Settings > Accounts
  • Microsoft Office apps (Word, Excel, etc)
  • Facebook, Twitter, Snapchat...all social media
  • Amazon Kindle and shopping
  • Netflix, Hulu, etc.
  • Your banking app

The good news is, you can almost always recover your password by going to the application's web site.

Phone Backup

There are backup programs you can use to back up your entire phone, and then restore it later. Unfortunately, I can't recommend any because I never use them. Sorry. But here's a recent web article about backup apps.

10 best Android backup apps and other ways to backup Android!

The advantage to a full backup app is, if something goes wrong, you should be able to put your phone back the way it was. But don't rely on that. Take other steps, below, to protect your data.

App Data

Your phone's apps aren't your life. Your primary goal is to be able to recreate your data.

Your data is your life.

First, here's a quick tutorial on connecting your phone to your computer and seeing its files. Most of you know how to do this, but it's worth documenting here.

  1. Plug your phone into your computer using a USB cable.
  2. Your phone should be recognized. You might be prompted how to use the connection. Choose "Transfer files" (MTP). Don't choose to transfer just photos (PTP). We want access to as many files as possible.
  3. You may need to enable data transfer from your phone. Swipe down and open "USB charging - Tap for more options", then choose Transfer Files
  4. On your PC, open the File Explorer, expand This PC, and find your phone's name.
  5. Select that folder, open Internal shared storage, and you'll have access to your phone's files (most of them).

You can recreate your data in these ways:

  1. Keeping it stored in the cloud. That's the way it works for Google, Facebook and Twitter. In general, if you can access your account and data from both the web and your phone, the data's in the cloud. These are easy.
  2. Exporting it in a file from the phone to your computer, and Importing that file later. Also called Backup and Restore.
  3. Copying it from the phone's application or document folders to your computer, then copying it back later.
  4. Recreating it by hand. You're trying to avoid this.

Cloud

Many applications store your data in places that are hard to get to. Often they store the data in the cloud, which is great because all you have to do is log in and it'll be there.

Backup/Restore or Export/Import

Others store the data on your phone. For example, I have a nice time tracking application, Timesheet - Time Tracker, that stores backups on the phone. It has a paid cloud sync feature, but I don't use it. Checking the settings, I see that I can also backup and restore files from my Google Drive, so I do that, verify the backup file is there, and test restoring the backup.

Backups are nice. But it's the restore that counts. Test your backups and exports.

If you're prompted where to export to, I recommend two places: your Google Drive in a folder named Phone, and/or your phone's Documents folder. In either case, remember to name the file so you'll know what it's for later, or put it in a well-named folder.

If you export to your phone, you'll see later how to copy those files to your computer.

Copying

Thankfully, very few applications store their data without providing a way to export it. When they do, that's a hard problem. If you're not very phone/tech savvy, you're probably stuck recreating that data by hand. If you do know your way around your phone's file system innards, then try a couple of things:

  1. Try to find the data in the Android folder, either in the data or media folder. Unfortunately, the data files are sometimes hard to find. Windows Search sometimes works, if you look for known file names.
  2. Advanced users can install and use an Android file explorer to try to access hidden files, then copy those files to a standard phone folder such as Documents.

Personal Data

A lot of your application data is personal data, but there's a big chunk of stuff that might be on your phone and nowhere else. That chunk is likely to be:

  • Pictures
  • Downloads, such as PDFs or videos
  • Anything you copied or moved to your phone from your computer

Copying this data is pretty easy.

  1. Connect your phone to your computer and open its files (see above).
  2. Create a folder on your Desktop or Documents called Phone and open it.
  3. Copy everything from your phone to that folder.

DO THIS!

This may take a while. In my case, it was only about two minutes. But if it takes longer, that's OK. You'll have captured everything that you possibly can. Do this after you've exported app data as shown above.

Looking at the folders, you might wonder where your camera pictures are. They're in the DCIM folder.

Breathe and Continue

At this point, you should have captured any data that will be lost from your phone. Now, stop. Walk away for five minutes. Come back. And review. Really, do this, because you'll probably think of something you missed. That five minute break may save you hours later.

Factory Reset

This is pretty easy.

  1. On your phone, open Settings > Backup & reset > Factory data reset
  2. You'll get a nice warning about losing data, and which accounts you're signed in to. Do you have usernames and passwords to these?
  3. When you're ready, choose Reset Phone.

Android will wipe your phone and reinstall the current factory version of the operating system.

Factory reset does not mean "like it was when I bought the phone." You'll get the current version of the Android operating system, including any updates you had installed.

Setup

After the reset, you'll go through the initial setup. This will include:

  1. Reconnecting to your cell carrier, which should happen automatically.
  2. Connecting to your WiFi, if you're using it.
  3. Entering your Google account information. Be careful here. You'll be prompted to reinstall all your apps. Think carefully whether you want to do this or install them manually.

After your account is set up, you'll go through the tedious process of reinstalling your applications from the Play Store. But I think you'll find that you can prioritize to just the apps you need to survive and be productive (especially if you use your phone for work).

The biggest hassle is, frankly, all the little settings you didn't know you had, getting your phone working just the way you like it.

Don't Rush

Are you doing a factory reset because of phone trouble? If so, don't rush to reinstall your apps. After the initial setup, before reinstalling anything, check whether the problem has gone away. If so, great!

Now install the apps one at a time, and check for that problem after each installation. This might take hours, or days, but you might find an offending application and save yourself lots of headaches later.

You might just save your mind.

Monday, May 8, 2017

IdentityServer3 with PKCE Part 4 - Persisting User Data

This series simulates a native application accessing a protected Web API resource, using OAuth2 via IdentityServer3. It demonstrates using Proof Key for Code Exchange (PKCE), and is in four parts:

  1. Build a simple authorization server, consumed by native application.
  2. Build a protected resource.
  3. Persist server configuration to database.
  4. Persist user data to database using Microsoft.Identity and SQL Server.

Important
This series does not create an OpenID Connect (OIDC) server. It is OAuth-only, since the PKCE specification doesn't require OIDC.

Where We're At

We're persisting IdentityServer configuration data, but still using in-memory users. IdentityServer can be extended to validate against an identity store, and supports Microsoft's AspNet.Identity via Entity Framework on SQL Server.

Important IdentityServer is not intended or used to add, edit or delete users. Its authors make that clear. The server's job is only to verify identity. In this sample, there's a clunky method for populating the user database, and there are no API methods for registering a user. Such an API could be added using controller code similar to what's in the default ASP.NET Web API templates.
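As a purely hypothetical illustration (not part of this sample), a minimal registration endpoint could reuse the same Microsoft.AspNet.Identity types this post configures below. The route name and connection string name are my own choices.

```csharp
//Hypothetical sketch only - not part of the sample solution.
//Reuses the Identity types configured later in this post.
using System.Web.Http;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

namespace AuthAndApi.Controllers
{
    [Route("register")]
    public class RegisterController : ApiController
    {
        public IHttpActionResult Post(string username, string password)
        {
            //Same context/store/manager chain as ConfigureUsers below
            var context = new IdentityDbContext("IdUsersDb");
            var userStore = new UserStore<IdentityUser>(context);
            var userManager = new UserManager<IdentityUser>(userStore);

            var result = userManager.Create(new IdentityUser(username), password);
            return result.Succeeded
                ? (IHttpActionResult)Ok()
                : BadRequest(string.Join("; ", result.Errors));
        }
    }
}
```

A production version would also validate input and apply password policy via UserManager.PasswordValidator.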

AspNetIdentity

References
IdentityServer3.AspNetIdentity Sample
Identity Server 3 using ASP.NET Identity
OAuth2 Walkthrough using Identity Server and ASP.NET

Install these packages.

Install-Package IdentityServer3.AspNetIdentity -Project AuthAndApi
Install-Package Microsoft.AspNet.Identity.EntityFramework -Version 2.2.1 -Project AuthAndApi

Add a new connection string to web.config, for the AspNet Identity database.

    <add name="IdUsersDb" connectionString="Server=(localDb)\MSSQLLocalDb;AttachDbFilename=|DataDirectory|IdUsers.mdf;Database=IdUsers-ncacpkce;Integrated Security=true;" providerName="System.Data.SqlClient" />

Update Factory.cs

//Add these usings
using Microsoft.AspNet.Identity.EntityFramework;
using Microsoft.AspNet.Identity;
using IdentityServer3.AspNetIdentity;
using IdentityServer3.Core.Services;
using IdentityServer3.Core.Services.InMemory;

Change the Configure method, adding code to persist the users, and a new method to populate the database.

        public static IdentityServerServiceFactory Configure(string optionsConnectionString, string usersConnectionString)
        {
            //Configure persisting IdentityServer operational services
            //The database will be created if needed. The default schema is being used.
            var efConfig = new EntityFrameworkServiceOptions
            {
                ConnectionString = optionsConnectionString
            };

            // these two calls just pre-populate the test DB from the in-memory config.
            // clf note: there's probably a better way to do this for a production app.
            ConfigureClients(Clients.Get(), efConfig);
            ConfigureScopes(Scopes.Get(), efConfig);

            var factory = new IdentityServerServiceFactory();
            factory.RegisterConfigurationServices(efConfig);
            factory.RegisterOperationalServices(efConfig);
            factory.ConfigureClientStoreCache();
            factory.ConfigureScopeStoreCache();
            //persist users
            //populate the database. This creates the database, if needed.
            ConfigureUsers(Users.Get(), usersConnectionString);
            //configure IdentityServer to use the database
            factory.UserService = new Registration<IUserService>(UserServiceFactory.Create(usersConnectionString));

            //configure cleanup for expired data
            var cleanup = new TokenCleanup(efConfig); //by default every 60 seconds

            return factory;
        }

        public static void ConfigureUsers(IEnumerable<InMemoryUser> users, string connectionString)
        {
            //create the Identity UserManager
            var context = new IdentityDbContext(connectionString);
            var userStore = new UserStore<IdentityUser>(context);
            var userManager = new UserManager<IdentityUser>(userStore);
            foreach (var user in users)
            {
                //no error checking!
                //only add if doesn't exist
                var test = userManager.FindByName(user.Username);
                if (test != null) { continue; }
                userManager.Create(new IdentityUser(user.Username), user.Password);
                test = userManager.FindByName(user.Username);
                foreach (var claim in user.Claims)
                {
                    if (claim.Type.ToLower() == "email") { test.Email = claim.Value; }
                    userManager.AddClaim(test.Id, claim);
                }
                userManager.Update(test);
            }
        }

And finally, add a new factory class, which is used in the line factory.UserService = above.

   public static class UserServiceFactory
    {
        public static AspNetIdentityUserService<IdentityUser, string> Create(string connectionString)
        {
            var context = new IdentityDbContext(connectionString);
            var userStore = new UserStore<IdentityUser>(context);
            var userManager = new UserManager<IdentityUser>(userStore);
            return new AspNetIdentityUserService<IdentityUser, string>(userManager);
        }
    }

That's it! The Startup class doesn't change. Running the solution creates the new database, and the authentication works as before.


Wrap Up

In this series, I demonstrated authorizing a mobile application in a secure way. This was necessarily a simple sample. Some natural next steps and questions to answer would be:

  1. How to customize the IdentityServer login and authorization screens
  2. Configure for OpenID Connect
  3. Show using Google authentication

IdentityServer3 with PKCE Part 3 - Persist IdentityServer Configuration

This series simulates a native application accessing a protected Web API resource, using OAuth2 via IdentityServer3. It demonstrates using Proof Key for Code Exchange (PKCE), and is in four parts:

  1. Build a simple authorization server, consumed by native application.
  2. Build a protected resource.
  3. Persist server configuration to database.
  4. Persist user data to database using Microsoft.Identity and SQL Server.

Important
This series does not create an OpenID Connect (OIDC) server. It is OAuth-only, since the PKCE specification doesn't require OIDC.

Where We're At

Until now, we've been using in-memory configuration data (clients and scopes). But we'd like a way to persist the configuration to a database.

IdentityServer and Entity Framework

References

Entity Framework support for Clients, Scopes, and Operational Data
Operational Data
Clients and Scopes
Caching results for client, scope, and user stores

Add these packages.

Install-Package IdentityServer3.EntityFramework -Project AuthAndApi

Add the App_Data folder. This is required if you want to use the connection string below.

Add a connection string for the configuration database. The sample uses SQL localDb 2016.

<connectionStrings>
    <add name="IdConfigDb" 
         connectionString="Server=(localDb)\MSSQLLocalDb;AttachDbFilename=|DataDirectory|IdConfig.mdf;Database=IdConfig;Integrated Security=true;" 
         providerName="System.Data.SqlClient" />
  </connectionStrings>

Add an IdentityServerConfiguration folder, and a Factory.cs class. The factory class takes care of setting the configuration database source, and also initially populating the database using our in-memory client and scope collections.

Note
This sample doesn't provide a way to edit configuration data. There is a NuGet package that adds web-based configuration administration to a project. I haven't tried it, but here's the link. IdentityServer3.Admin

Factory.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Configuration;
using IdentityServer3.EntityFramework;
using IdentityServer3.Core.Models;
using AuthAndApi.IdOptions;
using AuthAndApi.IdUsers;

namespace AuthAndApi.IdentityServerConfiguration
{
    public class Factory
    {
        public static IdentityServerServiceFactory Configure(string optionsConnectionString)
        {
            //Configure persisting IdentityServer operational services
            //The database will be created if needed. The default schema is being used.
            var efConfig = new EntityFrameworkServiceOptions
            {
                ConnectionString = optionsConnectionString
            };

            // these two calls just pre-populate the test DB from the in-memory config.
            // clf note: there's probably a better way to do this for a production app.
            ConfigureClients(Clients.Get(), efConfig);
            ConfigureScopes(Scopes.Get(), efConfig);

            var factory = new IdentityServerServiceFactory();
            factory.RegisterConfigurationServices(efConfig);
            factory.RegisterOperationalServices(efConfig);
            factory.ConfigureClientStoreCache();
            factory.ConfigureScopeStoreCache();
            //Still using in-memory users for now
            factory.UseInMemoryUsers(Users.Get());

            return factory;
        }

        public static void ConfigureClients(IEnumerable<Client> clients, EntityFrameworkServiceOptions options)
        {
            using (var db = new ClientConfigurationDbContext(options.ConnectionString, options.Schema))
            {
                if (!db.Clients.Any())
                {
                    foreach (var c in clients)
                    {
                        var e = c.ToEntity();
                        db.Clients.Add(e);
                    }
                    db.SaveChanges();
                }
            }
        }

        public static void ConfigureScopes(IEnumerable<Scope> scopes, EntityFrameworkServiceOptions options)
        {
            using (var db = new ScopeConfigurationDbContext(options.ConnectionString, options.Schema))
            {
                if (!db.Scopes.Any())
                {
                    foreach (var s in scopes)
                    {
                        var e = s.ToEntity();
                        db.Scopes.Add(e);
                    }
                    db.SaveChanges();
                }
            }
        }
    }
}

Update Startup.cs, replacing the previous IdentityServer configuration that explicitly configured the Factory with in-memory data, with the new call to Factory.Configure.

            //Configure IdentityServer for issuing authentication tokens
            var options = new IdentityServerOptions
            {
                Factory = Factory.Configure("IdConfigDb"),
                //SSL MUST be used in production
#if DEBUG
                RequireSsl = false
#endif
            };
            app.UseIdentityServer(options);

ASIDE!

Running the app at this point, I got an error in Startup.cs that I was missing a dependency.

 


When I installed the package IdentityServer3.EntityFramework, Newtonsoft.Json version 8 was uninstalled, because the package requires version 9 or greater. The package installer output confirms this.

Successfully uninstalled 'Newtonsoft.Json.8.0.3' from AuthAndApi
Successfully installed 'Newtonsoft.Json 9.0.1' to AuthAndApi

No package requires version 8 or lower, so what gives? For some reason, the solution's web.config assembly binding section wasn't updated properly. It still had this:

     <dependentAssembly>
       <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
       <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
     </dependentAssembly>

You could manually change the max version number to 9.0.0.0, or reinstall the Newtonsoft package, which will do it for you.

Update-Package -Reinstall Newtonsoft.Json -Project AuthAndApi
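Either way, the binding redirect should end up matching the installed version (per the package output above):

```xml
<dependentAssembly>
  <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
</dependentAssembly>
```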

When I run the application, it behaves exactly as before. Checking in the App_Data folder, I see my database was created.


One difference, though. After the first time running the app, I no longer was prompted to authorize the scope. This might be related to these factory settings:

factory.ConfigureClientStoreCache();
factory.ConfigureScopeStoreCache();

Wrap Up

Our server's configuration is now being persisted to a database. The only thing (for this sample!) left is to persist the user's credentials and claims.

IdentityServer3 with PKCE Part 2 - Protected Resource Server

This series simulates a native application accessing a protected Web API resource, using OAuth2 via IdentityServer3. It demonstrates using Proof Key for Code Exchange (PKCE), and is in four parts:

  1. Build a simple authorization server, consumed by native application.
  2. Build a protected resource.
  3. Persist server configuration to database.
  4. Persist user data to database using Microsoft.Identity and SQL Server.

Important
This series does not create an OpenID Connect (OIDC) server. It is OAuth-only, since the PKCE specification doesn't require OIDC.

Where We're At

We can get a token, but there's nothing to use it on. We've configured IdentityServer with information about our client application (PretendMobile), and we used that information (Client ID, etc) to get a token.

Now we'll add a Web API with a protected method that just returns some claim data. It will be configured to:

  1. Accept the access token our pretend mobile app gets from the authorization server.
  2. Validate that token against the authorization server.

We'll use the same project as the authorization server, but it's important to know that the Web API could be in a completely separate project.

Web API

References
Simplest OAuth Server
Simplest OAuth2 Walkthrough Code

Install the packages for Web API and for accepting the IdentityServer tokens. Note the IdentityModel version, which is the latest for .NET 4.6 (the 2.x versions are for .NET Core).

Install-Package Microsoft.AspNet.WebApi.Client -Version 5.2.3 -Project AuthAndApi
Install-Package Microsoft.AspNet.WebApi.Owin -Version 5.2.3 -Project AuthAndApi
Install-Package IdentityModel -Version 1.9.2 -Project AuthAndApi
Install-Package IdentityServer3.AccessTokenValidation -Version 2.15.0 -Project AuthAndApi

Add two things to the OWIN pipeline: accepting access tokens, and using WebApi.

Updated Startup.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Configuration;
using Owin;
using AuthAndApi.IdOptions;
using AuthAndApi.IdUsers;
using System.Web.Http;
using IdentityServer3.AccessTokenValidation;

namespace AuthAndApi
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            //Configure IdentityServer for issuing authentication tokens
            var options = new IdentityServerOptions
            {
                Factory = new IdentityServerServiceFactory()
                .UseInMemoryClients(Clients.Get())
                .UseInMemoryScopes(Scopes.Get())
                .UseInMemoryUsers(Users.Get()),
                //SSL MUST be used in production
#if DEBUG
                RequireSsl = false
#endif
            };
            app.UseIdentityServer(options);

            // Configure Web API to accept access tokens from IdentityServer, and require a scope of 'email'
            app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
            {
                //In this case, the authorization server is also the Web API server
                Authority = "http://localhost:27230",
                ValidationMode = ValidationMode.ValidationEndpoint,
                RequiredScopes = new[] { "email" }
            });

            //Configure Web API
            var config = new HttpConfiguration();
            config.MapHttpAttributeRoutes();
            app.UseWebApi(config);
        }
    }
}

Add a Controllers folder, and a controller named Test.


Create an API method that requires authorization. The method checks for evidence that a user authorized the request (and not just the application).

Note
Code shamelessly stolen from the IdentityServer samples

TestController.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
//Added
using System.Security.Claims;

namespace AuthAndApi.Controllers
{
    [Route("test")]
    [Authorize]
    public class TestController : ApiController
    {
        public IHttpActionResult Get()
        {
            var user = User as ClaimsPrincipal;

            var subjectClaim = user.FindFirst("sub");
            if (subjectClaim != null)
            {
                return Json(new
                {
                    message = "OK user",
                    client = user.FindFirst("client_id").Value,
                    subject = subjectClaim.Value,
                    email = user.FindFirst("email").Value
                });
            }
            else
            {
                return Json(new
                {
                    message = "User claim not found for this client_id",
                    client = user.FindFirst("client_id").Value
                });
            }
        }
    }
}

The API server now has a protected resource. Let's prove it requires authorization. Run the application. The console will still get an access token, but we're not doing anything with it, yet. Leave the project running and manually enter this URL into a browser to try to access the protected resource.

http://localhost:27230/test

The server will return something like this:

<Error>
  <Message>Authorization has been denied for this request.</Message>
</Error>

Pretend Mobile App - Access Protected Resource

In the PretendMobile project, add and call a method that accesses the protected Web API resource. The token is added to the request in the Authorization header.

        static void Main(string[] args)
        {
            //get token
            var response = GetAccessToken();
            Console.WriteLine("Access Token: " + response.AccessToken);
            //use token
            CallProtectedApiResource(response.AccessToken);
            Console.ReadLine();
        }

        static void CallProtectedApiResource(string access_token)
        {
            var client = new HttpClient();
            client.SetBearerToken(access_token);
            Console.WriteLine(client.GetStringAsync("http://localhost:27230/test").Result);
        }

Run the solution. After authorizing, the application successfully gets the protected resource's data.


Wrap Up

We now have a fully functioning mobile application! Next up, persisting the IdentityServer configuration.

IdentityServer3 with PKCE Part 1 - Simple OAuth2 Server

This series simulates a native application accessing a protected Web API resource, using OAuth2 via IdentityServer3. It demonstrates using Proof Key for Code Exchange (PKCE), and is in four parts:

  1. Build a simple authorization server, consumed by native application.
  2. Build a protected resource.
  3. Persist server configuration to database.
  4. Persist user data to database using Microsoft.Identity and SQL Server.

Important
This series does not create an OpenID Connect (OIDC) server. It is OAuth-only, since the PKCE specification doesn't require OIDC.

Overview - Why PKCE?

References
A Guide to OAuth 2.0 Grants Proof Key for Code Exchange

Mobile (i.e. "native") applications are difficult when it comes to authorizing users, for two reasons: they are long-lived, and they can't keep a secret.

In the descriptions below, "two-step" means calling the authorization endpoint then the token endpoint, "one-step" means calling just the token endpoint, "trusted" means the client can keep a secret key, "short-lived" means only an access token, and "long-lived" means an access+refresh token.

The standard OAuth 2 grants are:

  • Authorization code grant (two-step, trusted, long-lived)
  • Implicit grant (one-step, short-lived, meant for javascript-only browser apps)
  • Resource owner credentials grant (one-step, trusted, long-lived, for passing user/password)
  • Client credentials grant (one-step, trusted, short-lived, meant for app access, not users)
  • Refresh token grant (one-step, requires access token)

None of these fits well for a mobile app, which is untrusted but long-lived. The solution is Proof Key for Code Exchange (PKCE). With PKCE, the client generates a temporary secret (the code verifier) and a value derived from it (the code challenge). It sends the challenge to the authorization endpoint, which stores it, then sends the verifier to the token endpoint, which checks it against the stored challenge. This mitigates the threat of another application capturing the authorization code; without the verifier, the auth code is useless.
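To make the exchange concrete, here's a minimal, self-contained C# sketch of generating a verifier and its challenge per RFC 7636. The names are mine, not from the series' PretendMobile code.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical sketch of PKCE value generation (RFC 7636), not sample code.
public static class PkceSketch
{
    // The code verifier: a high-entropy secret the client keeps in memory.
    public static string CreateCodeVerifier()
    {
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        return Base64Url(bytes);
    }

    // The code challenge sent to the authorization endpoint:
    // BASE64URL(SHA256(ASCII(verifier)))
    public static string CreateCodeChallenge(string verifier)
    {
        using (var sha = SHA256.Create())
        {
            return Base64Url(sha.ComputeHash(Encoding.ASCII.GetBytes(verifier)));
        }
    }

    // Base64url: standard base64 with '+'/'/' swapped and '=' padding removed.
    static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    public static void Main()
    {
        var verifier = CreateCodeVerifier();
        var challenge = CreateCodeChallenge(verifier);
        Console.WriteLine("code_verifier:  " + verifier);
        Console.WriteLine("code_challenge: " + challenge);
        // SHA-256 is 32 bytes, so the base64url challenge is always 43 chars.
    }
}
```

The client sends code_challenge (and code_challenge_method=S256) on the authorization request, and code_verifier on the token request; the server recomputes the hash and compares.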

Authorization Server - Simplistic

References
Simplest OAuth Server
Simplest OAuth2 Walkthrough Code

We'll create a simple OAuth2 server using IdentityServer3. We won't even create a Web API, so we can see just the server in operation. In other words, we'll be able to get tokens, but have nothing to use them with.

Create a new Empty web application project named AuthAndApi

2017-05-03_135031

Install packages for the authentication server. This assumes using IIS for the web host.

Install-Package IdentityServer3 -Project AuthAndApi
Install-Package Microsoft.Owin.Host.SystemWeb -Version 3.1.0 -Project AuthAndApi

All the IdentityServer documents say RAMFAR (runAllManagedModulesForAllRequests) should be enabled in web.config.

  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>

Add folders named IdOptions and IdUsers. These represent the database storage we'll configure later.

In IdOptions folder, add these classes.

Scopes.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Models;

namespace AuthAndApi.IdOptions
{
    public class Scopes
    {
        public static List<Scope> Get()
        {            
            return new List<Scope>
            {
                //Create the email scope for OAuth. Later, when using OpenID Connect, return StandardScopes.Email instead
                new Scope() { Name = "email", DisplayName = "Email", Claims = new List<ScopeClaim>() { new ScopeClaim("email", true) } }
            };
        }
    }
}

Clients.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Models;

namespace AuthAndApi.IdOptions
{
    public class Clients
    {
        public static List<Client> Get()
        {
            return new List<Client>()
            {
                //A client is an application configured to request tokens.
                //The resource, such as Web API, is configured to accept these tokens.
                new Client()
                {
                    ClientName = "Pretend Android Mobile App",
                    ClientId = "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef",
                    Enabled = true,
                    //About Reference vs JWT tokens
                    //https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/
                    AccessTokenType = AccessTokenType.Reference,
                    Flow = Flows.AuthorizationCodeWithProofKey,
                    AllowedScopes = new List<string>
                    {
                        "email"
                    },
                    //URIs the authorization code is allowed to be redirected to
                    RedirectUris = new List<string>()
                    {
                        "http://localhost:19191/"
                    },
                    //Shouldn't be required for this flow, but is
                    ClientSecrets = new List<Secret>()
                    {
                        new Secret("e9711973-edd7-496f-b415-b10ad0667305".Sha256())
                    }
                }
            };
        }
    }
}

In IdUsers folder:

Users.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Services.InMemory;
using System.Security.Claims;
namespace AuthAndApi.IdUsers
{
    public class Users
    {
        public static List<InMemoryUser> Get()
        {
            return new List<InMemoryUser>
            {
                new InMemoryUser
                {
                    //Subject is identifier? Could be external identity,
                    //so it's not our own database primary key
                    Subject = "1",
                    Username = "alice",
                    Password = "secret",
                    Claims = new List<Claim>()
                    {
                        new Claim("email", "alice@example.com")
                    }
                }
            };
        }
    }
}

Add Startup.cs. This is the class that builds the OWIN pipeline, so the order in which features are added is important.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
//Added
using IdentityServer3.Core.Configuration;
using Owin;
using AuthAndApi.IdOptions;
using AuthAndApi.IdUsers;


namespace AuthAndApi
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            //Configure IdentityServer for issuing authentication tokens
            var options = new IdentityServerOptions
            {
                Factory = new IdentityServerServiceFactory()
                .UseInMemoryClients(Clients.Get())
                .UseInMemoryScopes(Scopes.Get())
                .UseInMemoryUsers(Users.Get()),
                //SSL MUST be used in production
#if DEBUG
                RequireSsl = false
#endif
            };
            //add to the pipeline
            app.UseIdentityServer(options);
        }
    }
}

The basic IdentityServer is configured for OAuth. For now, we don't need a web page to open, since this is a server, so turn off displaying the page in project Properties.

2017-05-06_074519

Note about Reference vs JWT tokens
There are two general types of access tokens IdentityServer can return: Reference, or JSON Web Token (JWT). JWTs are self-contained; they can be validated without contacting the server. But they're slightly more complicated to configure. One advantage of using Reference tokens is that they can be quickly revoked, since the resource must call the authorizing server on each use. The disadvantage is this increases network traffic.
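To make "self-contained" concrete, here's a toy HS256 JWT built and checked in Python. This is illustrative only (the key and claims are made up); real tokens come from the server and should be handled with a proper JWT library:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

key = b"key-shared-by-server-and-resource"   # illustrative secret

header  = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "1", "email": "alice@example.com"}).encode())
sig     = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
token   = f"{header}.{payload}.{sig}"

# Self-contained: anyone holding the token can read the claims
# without calling the authorization server.
claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))

# Validation needs the key, but still not a round trip to the server.
def is_valid(tok: str) -> bool:
    h, p, s = tok.split(".")
    expected = b64url(hmac.new(key, f"{h}.{p}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(s, expected)

assert claims["email"] == "alice@example.com"
assert is_valid(token)
assert not is_valid(f"{header}.{payload}.tampered")
```

A Reference token, by contrast, is just an opaque identifier; the claims live on the server, which is exactly why revocation is immediate and why every use costs a network call.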

Pretend Mobile App - Get Token

References
Simplest OAuth Server
Simplest OAuth2 Walkthrough Code

Now we're going to get an authorization code and token from the server, using a native application that simulates a mobile app. In the Authorization Code Grant, the authorization code is retrieved by using an agent--normally a web browser. Mobile apps can open browsers on behalf of applications, providing a seamless experience. They can also register special URIs to receive requests. The basic flow is:

Front Channel

  1. App requests code from the server's authorization endpoint, and includes what URI to post the results to.
  2. Browser opens, server presents authentication/authorization screens, then posts code to supplied URI.

Back Channel

  1. App reads the code and sends request to the server's token endpoint.
  2. Server validates, then responds with the access token.
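Spelled out as HTTP, the two channels reduce to two requests. Here's a sketch using the sample's localhost endpoints and client values; the challenge, code, and verifier are placeholders, not real values:

```python
from urllib.parse import urlencode

# Front channel: the browser is sent to /authorize carrying the code challenge.
front_channel_url = "http://localhost:27230/connect/authorize?" + urlencode({
    "client_id": "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef",
    "response_type": "code",
    "scope": "email",
    "redirect_uri": "http://localhost:19191/",
    "response_mode": "form_post",   # server POSTs the code to the redirect URI
    "code_challenge": "<BASE64URL-SHA256-of-verifier>",   # placeholder
    "code_challenge_method": "S256",
})

# Back channel: the app POSTs the returned code plus the plain verifier
# directly to /token (no browser involved).
back_channel_form = {
    "grant_type": "authorization_code",
    "code": "<code received via form_post>",              # placeholder
    "redirect_uri": "http://localhost:19191/",
    "client_id": "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef",
    "client_secret": "e9711973-edd7-496f-b415-b10ad0667305",
    "code_verifier": "<the original verifier>",           # placeholder
}
```

The console app below does exactly this, with IdentityModel's AuthorizeRequest and TokenClient building the URL and the form for us.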

Add a standard console application to the solution, named PretendMobile.

2017-05-06_083908

Add packages. These aren't required (and wouldn't be available for a mobile application). They just include helper classes and methods for the HTTP communication. Anything they do could be done with HttpClient or some other appropriate network library.

Note
The version number is important! Later versions are for .NET Core.

Install-Package IdentityModel -Version 1.9.2 -Project PretendMobile

Program.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
//Added
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Net.Http;
using IdentityModel;
using IdentityModel.Client;

namespace PretendMobile
{
    class Program
    {
        static void Main(string[] args)
        {
            var response = GetAccessToken();
            Console.WriteLine("Access Token: " + response.AccessToken);
            Console.ReadLine();
        }

        static TokenResponse GetAccessToken()
        {
            string clientId = "a065ae9f-0d02-45fa-85b6-4dc93e2ad5ef";
            string clientSecret = "e9711973-edd7-496f-b415-b10ad0667305";
            string scope = "email";
            string redirectUri = "http://localhost:19191/";
            string verifier = CryptoRandom.CreateUniqueId(64);
            string codeChallenge = verifier.ToCodeChallenge();
            string authorizationEndpoint = "http://localhost:27230/connect/authorize";
            string tokenEndpoint = "http://localhost:27230/connect/token";

            //Front channel to get the authorization code
            //This simulates the mobile app starting a browser session on the authorization server,
            //allowing the user to authenticate,
            //while the app waits for the redirect from the auth server that contains the code.
            OpenBrowserToAuthenticate(authorizationEndpoint, clientId, scope, redirectUri, codeChallenge);
            string code = ReceiveAuthCodeFromServer(redirectUri);
            Console.WriteLine("Authorization Code: " + code);

            //Back channel to get the access token
            //The client secret is still required by IdentityServer, even though it's not
            //really secret. The code challenge and verifier are the real secret.
            //The redirectUri is part of the verification process. There's no actual redirection.
            var client = new TokenClient(
                tokenEndpoint,
                clientId,
                clientSecret);
            return client.RequestAuthorizationCodeAsync(code, redirectUri, verifier).Result;
        }

        static void OpenBrowserToAuthenticate(string authorizationEndpoint, string clientId, string scope, string redirectUri, string codeChallenge)
        {
            string nonce = CryptoRandom.CreateUniqueId(64);

            AuthorizeRequest request = new AuthorizeRequest(authorizationEndpoint);
            string url = request.CreateAuthorizeUrl(
                clientId: clientId,
                responseType: "code",
                scope: scope,
                redirectUri: redirectUri,
                nonce: nonce,
                responseMode: OidcConstants.ResponseModes.FormPost,
                codeChallenge: codeChallenge,
                codeChallengeMethod: OidcConstants.CodeChallengeMethods.Sha256);

            Debug.WriteLine(url);
            Process.Start(url);
        }

        static string ReceiveAuthCodeFromServer(string redirectUri)
        {
            var web = new HttpListener();
            web.Prefixes.Add(redirectUri);
            Console.WriteLine("Listening for request from auth server...");
            web.Start();
            var req = web.GetContext().Request;
            Stream body = req.InputStream;
            var encoding = req.ContentEncoding;
            var reader = new StreamReader(body, encoding);
            string code = reader.ReadToEnd().Replace("code=", "");
            Console.WriteLine("Got it, closing.");

            //Not sure how, but the mobile app should end the browser session, maybe when the app
            //gets the response.
            body.Close();
            reader.Close();
            web.Close();
            return code;
        }

    }
}

Let's configure the solution to run both projects. Right-click Solution and choose Properties. Change the Startup Project to "Multiple startup projects," the Actions to "Start," and be sure the identity server project starts first.

2017-05-06_091226

Run the solution.

  1. The server will start.
  2. The console application will open a browser to the /authorize endpoint, requesting an authorization code.
  3. Authenticate using credentials "alice" and "secret".
  4. Authorize the requested scope(s).
  5. The authorization code is sent back to the listening application.
  6. The app sends a direct request to the /token endpoint, requesting an access token.
  7. The server returns the token.

2017-05-06_104414

2017-05-06_104502

2017-05-06_104651

2017-05-06_104726

Wrap Up

We've built a basic OAuth2 server that uses the Authorization Code Grant plus PKCE for dynamically generated client secrets, and can get an access token. In the next part, we'll build a protected resource that requires a token.

Tuesday, March 28, 2017

TFS Continuous Integration Walk Through Part 5c - Multiple Solutions: Dependencies

ci-logo

This is part of a series of walk throughs exploring CI in TFS, starting from the ground up. The entire series and source code are maintained at this BitBucket repository.

https://bitbucket.org/bladewolf55/tfs-ci-samples

Previous Part: TFS Continuous Integration Walk Through Part 5b - Multiple Solutions: Simple Project References




What I'm Talking About

Businesses often have source code that's years or decades old, and that code accumulates problems due to how it was structured into folders. Especially difficult is how dependencies were managed. It's common to see:

  • Projects with project references to other projects in other solutions, sometimes nesting/cascading in a tangled mess.
  • Third party libraries that require installation, such as UI controls.
  • Multiple ways of managing .dll dependencies.

Some common challenges--and reasons why the above happen--are:

  • Multiple projects depend on a shared project, and they often need to step through the shared project's code.
  • Over the years, different developers did things how they liked.
  • Source control wasn't used, or changed.

Where I'll End Up

I'll start with a set of solutions that have some dependency problems. I'll show how they can work with continuous integration. Then, I'll improve the dependency handling.

A Problematic Structure

Let's imagine a TFS repository. Instead of a separate Team Project for each solution, there's a single Team Project named $Main that has all the solutions underneath it.

In this folder structure, I'm showing solution folders with their project folders below. So, ReverseIt is a solution folder with the ReverseIt project folder below it, which is the default Visual Studio layout.

$/Main
  _Shared
    NameDb
      NameDb
  ReverseIt
    ReverseIt
      > Depends on RevEngine (project reference)
    RevEngine
      > Depends on jamma.dll
  ReverseNames
    ReverseNames
      > Depends on RevEngine (project reference)
      > Depends on NameDb (NuGet package)

  • NameDb DLL returns a list of names, is packaged using NuGet, and stored in a local source.
  • ReverseIt Console reverses whatever text you type in.
  • RevEngine DLL has a ReverseText method. It is a project reference.
  • Jamma.dll is a third party security dll. The company is out of business.
  • ReverseNames Console displays a list of reversed names coming from NameDb.

What are the pros and cons of this approach?

Pros

  • If you Get Latest on $Main, all the solutions are in their correct relative folders.

Cons

  • You often have to get source you don't need.
  • The dependency on relative paths is brittle.
  • You can't use TFS's project management tools effectively.
  • Doesn't scale. What if you had fifty solutions using this approach?

Creating CI Builds As Is

My manager says, "We need to get these projects into TFS Build."

I ask, "Can I restructure TFS?"

He says, "Not yet."

I say, "OK."

Since I'm pretty sure there are dependency problems, the first thing I decide to do is spin up a clean machine, install Visual Studio with no changes, Get Latest on $Main, and try to build all the solutions.

What's this!? Multiple failures? Oh, no! What went wrong?

  1. ReverseNames failed because we're depending on an in-house NuGet package source, and didn't configure that, so the NameDb dependency didn't exist.
  2. RevEngine failed because Barry's the only developer who has ever worked on RevEngine, and only his machine has jamma.dll. It was never checked into source control.

Quite a bit more could go wrong, but you get the idea. Let's fix these with an eye toward our eventual build server.


BONUS CODE!

If I had lots of solutions, I could build all of them using two files in the root of the folder that has all the solution folders.

BuildAllSolutions.targets

<Project ToolsVersion="14.0"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Default">
  <!-- http://stackoverflow.com/a/9609086/1628707 -->
  <ItemGroup>
    <AllFiles Include=".\**\*.sln"/>
  </ItemGroup>
  <Target Name="Default">
    <PropertyGroup>
    <BuildCmd>@(AllFiles->'&quot;$(MSBuildToolsPath)\msbuild.exe&quot; &quot;%(Identity)&quot; /v:q /fl /flp:errorsonly;logfile=build-logs\%(filename)-log.txt','%0D%0A')</BuildCmd>
    </PropertyGroup>
    <Exec Command="mkdir build-logs" Condition="!Exists('build-logs')" />
    <Exec Command="$(BuildCmd)" />
  </Target>
</Project>

BuildAllSolutions.cmd

rem path to your latest VS build version
"C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe" BuildAllSolutions.targets
pause

Running the cmd file creates a folder named "build-logs", recursively builds each solution, and outputs each solution's errors. If a solution's log file is not empty, there was a build problem.

END BONUS CODE


Dealing With a Local NuGet Package Source

There are four (technically five or six!) places to store NuGet config files containing package source information, and four ways to specify the package source in TFS Build.

NuGet Config File Locations

Let's assume our in-house NuGet source is located at http://ngserver/nuget.

  1. User Profile - Enter it into Visual Studio's settings. This is fine for regular development, but not good for a build server because the build agent service will run as either Local System or a specific user account such as "tfsagent".

2017-03-17_155817

You can also manually edit the user profile's nuget.config, which is what the Visual Studio setting dialog is doing. The file is located at %APPDATA%\NuGet\NuGet.config. You add the source under packageSources.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <activePackageSource>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
  </activePackageSource>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <bindingRedirects>
    <add key="skip" value="False" />
  </bindingRedirects>
</configuration>

  2. Solution - Create a solution-level nuget.config file.

You can create a file named nuget.config, put it in your solution's root and add it to source control. This will determine which NuGet sources the solution uses.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <packageSources>
    <!-- uncomment clear if you want to ONLY use these sources -->
    <!-- otherwise, these sources are added to your list -->
    <!-- clear /> -->
  <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Note: NuGet 3.3 and earlier looked for a config file in a solution's .nuget folder. Not recommended.

  3. Machine-Wide - Create a machine-wide config file.

The machine-wide story is confusing. A machine-wide NuGet config file can reside in one of two folders. The folder changed with the introduction of NuGet 4.0, which is used by Visual Studio 2017.

  • %ProgramData%\NuGet\Config\ (NuGet 3.x or earlier), or
  • %ProgramFiles(x86)%\NuGet\Config\ (NuGet 4.0+)

It can be named anything that ends with .config, including NuGet.config. However, a custom name seems recommended.

For example, I could name the file SoftwareMeadows.Online.config. It would contain the package source like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Local Source" value="http://ngserver/nuget" />
  </packageSources>
</configuration>

Network Administrators will like this option because they can use it with Group Policies. A policy could target computers in Developers and Build Servers groups and always create the desired config file.

Note: NuGet 4.x does not look for config files in ProgramData.

  4. Default Config

If you're using NuGet 2.7 through 3.x, default sources can also be configured in the file %ProgramData%\NuGet\NuGetDefaults.config. These show up in Visual Studio as local, not machine-wide, sources.

Note: This file and location will not work with NuGet 4.x

  5. Other?

You could also put a nuget.config file in any folder and specify it using the nuget.exe -configFile switch, e.g. nuget restore -configfile c:\my.config.

Notes

  • Testing shows that in some cases if the package source URL is the same, only one source key is used. For example, if NuGet.config and NuGetDefaults.config have an identical source URL, the key from NuGet.config is used.
  • It appears a source listed in NuGetDefaults.config cannot be removed using the <clear /> tag. It can only be disabled.

Specifying the Config in TFS Build

Whichever method you use below, ensure the agent service has permissions to the config file. The service name will be something like "VSO Agent ([AgentName])". Microsoft recommends creating a user named "tfsagent". The default is Local Service.

2017-03-17_162121

  1. TFS Machine-Wide Path - RECOMMENDED

Personally, for internal development, I'd add the package source to the build server's machine-wide config file and be done with it. So, my path--assuming VS 2015 installed--would be something like: %ProgramData%\NuGet\Config\SoftwareMeadows.Online.config

2017-03-17_162400

Remember from above this will change if you install Visual Studio 2017 on the build server (or use NuGet 4.x).

  2. Add nuget.config to the build agent's profile.

Your build server is basically a development machine, with an agent automatically building the software. If you run the service using tfsagent, you could create/edit a nuget.config file found at C:\Users\tfsagent\AppData\Roaming\NuGet.

  3. TFS NuGet Installer Path Field

If you check in a nuget.config file with the solution, enter the path in your build definition's NuGet Installer step. This path is relative to the Path to Solution. I would use this solution if my team didn't all work in the same network, and so needed to use an authenticated NuGet server such as MyGet.

2017-03-17_162452

  4. Use the -configFile switch

You could also put a nuget.config file somewhere on the build server (or network?), and use the -configFile switch. Remember the build agent service needs permission to read the file.

2017-03-17_162410

Dealing with Barry

Barry's been with us for five years. Barry drinks his coffee black, and lots of it. Barry knows where every file on his machine is, and would prefer you didn't look over his shoulder. Barry has his code, please leave it alone.

Unfortunately, Barry assumes he'll always be here, and hasn't ever tested what would happen if his machine imploded in a fiery death. I go to Barry and say, "Your code doesn't build on a clean checkout." Barry storms over to my computer and starts typing. I observe, take notes, and when I see him copying jamma.dll, ask, "What's that?"

"Oh," he mumbles, "license dll. Forgot about that. Kinda important. RevEngine won't run without it."

I don't say anything, but make a note that, once I have all the software building and deployable, Barry might not be long for our company. In the meantime, there are two ways I can handle this old dependency.

  1. Ensure it's in a folder under the solution or project, reference it there, and add it to source control.

2017-03-17_164300

  2. Create the dll as a NuGet package, and add it to my NuGet server.

Download nuget.exe and put in same folder as the dll (or put in its own folder and add to the system path variables).

Open a command prompt, change directories to where the dll is, and run nuget spec jamma.dll.

Edit the resulting jamma.dll.nuspec file, changing or removing elements as desired.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>jamma.dll</id>
    <version>3.2.52</version>
    <authors>Jamma Ltd</authors>
    <owners>Jamma Ltd</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>License file</description>
    <copyright>Copyright 2017</copyright>
  </metadata>
</package>

Important: Now move the dll into a subfolder named "lib"

Package the dll using nuget pack.

Add the package (jamma.dll and jamma.dll.nuspec) to your NuGet server however is appropriate--it might be a simple copy/paste, or use nuget push. See the NuGet documentation.

Remove the reference from the project, and re-add from NuGet. Build and test. If everything's OK, delete the old jamma.dll file and folder.

Which would I do? Number 2, so that all my external dependencies are handled the same way (NuGet).

It's Building, so Add to CI

I test again by deleting and re-getting all the source code, open each and restore any NuGet packages, and build all the solutions. Everything builds, so I'm ready to configure TFS Build for continuous integration.

All my solutions are under the same team project. I'll need to be careful when I create my build definitions that I'm only checking out and building the solutions I want. The key to that is not saving the definition until its Repository and Triggers have been configured.

I'll create the first definition in detail, then just list settings for the remainders.

But first, the NuGet Package Source

If I haven't done it already, I'll add a machine-wide nuget configuration file that has my custom package source.

Create the file %ProgramData%\NuGet\Config\MySource.config with the source definition. In my case, I'm testing with a local NuGet server.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet Test Server" value="http://localhost:7275/" />
  </packageSources>
</configuration>

NameDb

Let's start at the top with the simple one, NameDb. In TFS, navigate to the correct collection and the Main team project. Then open the Build tab, start a new build and choose the Visual Studio template.

2017-03-25_143134

Create using the repository source Main Team Project.

2017-03-25_143854

Delete the Index, Copy and Publish steps. I can add those back if we want them. I don't need the NuGet installer step, either, but I'll configure it as an example since the other projects need it.

2017-03-25_144248

Set the NuGet Installer path to the NameDb solution file.

2017-03-25_145021

Set the Visual Studio Build path to the NameDb solution file.

2017-03-25_145758

Set the Visual Studio Test "Test Assembly" to the solution folder path. This is the physical folder where the agent will have pulled the files from TFS, the same as the Local Path we'll choose in the Repository tab. The path definition means "run any dll with 'test' in its name under the build definition folder, but not in the obj folders." Since our build configuration will be Release, it'll run Release/NameDb.Tests.dll.
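Based on the pattern the later definitions use, the Test Assembly filter here would look something like this (the _Shared\NameDb root is my assumption from the Repository mapping; verify against the screenshot):

```
_Shared\NameDb\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**
```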

2017-03-25_150526

Do NOT Save!

Change to the Repository tab. This controls what source code gets checked out by the build. Change the mapped paths. We'll keep the cloaked Drops folder, even though we aren't publishing anything yet.

Notice the LocalPath is set. This is so the contents of the NameDb solution folder are placed under a _Shared\NameDb folder, just like in the repository. It's not strictly needed for this build, but remember how this works if you're building multiple solutions with project dependencies in relative folders.

2017-03-25_151640

Change to the Variables tab and ensure the BuildConfiguration is "release".

2017-03-25_152836

Change to the Triggers tab. This controls what starts a build. Check Continuous Integration and set the path to the NameDb folder. We only want to build if a file in NameDb changes.

2017-03-25_153129

Now Save the definition and give it a good name like "NameDb".

2017-03-25_153325

Finally, queue the build and see if it succeeds!

Important
Check each step's output for what you expect. Especially, check that tests ran! The test step will succeed even if it doesn't find tests!

2017-03-25_153438

2017-03-25_155144

For the next solutions, I'll just list the settings for the build definitions. All of them use the Visual Studio template, and only keep the NuGet, Build and Test steps.

ReverseIt

BUILD TAB

  • NuGet Installer path to solution: $/Main/ReverseIt/ReverseIt.sln
  • Build solution path: $/Main/ReverseIt/ReverseIt.sln
  • Test assembly path: ReverseIt\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

REPOSITORY TAB

  • Map Server Path $/Main/ReverseIt to Local Path ReverseIt
  • Map Cloaked Path $/Main/ReverseIt/Drops

VARIABLES TAB

  • BuildConfiguration: release

TRIGGERS TAB

  • Include $/Main/ReverseIt

ReverseNames

Strictly speaking, I don't need to build RevEngine here. Any changes to RevEngine will trigger the ReverseIt build, and if that fails then someone--hopefully I--will be notified. What I do need to do is get the RevEngine source code into the correct relative folder, and install its NuGet packages. In short, I need to ensure RevEngine can be used by ReverseNames.

So, I'm going to have two sets of steps: one for RevEngine, and one for ReverseNames. It's a brittle definition: whoever works on RevEngine needs to know about this build, too, in case something needs to change.

BUILD TAB

  • NuGet Installer RevEngine

    • Path to solution: ReverseIt\**\*.csproj
    • NuGet Arguments: -PackagesDirectory ..\packages

    Notice I set the path to the local project files. This will restore packages for any .csproj file found. I also explicitly say where to put the packages folder relative to the .csproj file.

  • NuGet Installer ReverseNames

    • Path to solution: $/Main/ReverseNames/ReverseNames.sln
  • Build ReverseNames

    • Solution path: $/Main/ReverseNames/ReverseNames.sln
  • Test ReverseNames

    • Assembly path: ReverseNames\**\$(BuildConfiguration)\*test*.dll;-:**\obj\**

REPOSITORY TAB
This is critical. I'm telling the build exactly which project folders to pull from ReverseIt, i.e. RevEngine and RevEngine.Tests. This way I don't pull and build ReverseIt.csproj.

If I add a test project later, I'll need to add its mapped path here. Note that I removed the Drops cloaked path since I don't need it.

  • Map Server Path $/Main/ReverseIt/RevEngine to Local Path ReverseIt\RevEngine
  • Map Server Path $/Main/ReverseIt/RevEngine.Tests to Local Path ReverseIt\RevEngine.Tests
  • Map Server Path $/Main/ReverseNames to Local Path ReverseNames

VARIABLES TAB

  • BuildConfiguration: release

TRIGGERS TAB
I'm triggering the build if ReverseNames changes.

  • Include $/Main/ReverseNames

Here are some screenshots of the ReverseNames definition.

2017-03-26_190426

2017-03-26_190546

2017-03-26_190620

Improving the Solutions, Dependencies and Team Projects

What I've done so far works. Sort of. But it's not exactly ideal, especially if there were fifty solutions, not just three. One big thing I lose is the ability to maintain separate project boards and work items per project. To do that, I'd really like a separate Team Project per solution (or in some cases it could be multiple solutions).

The team projects might look like this.

$\NameDb
$\RevEngine
$\ReverseIt
$\ReverseNames

And then there's the project reference. The project reference is bound to cause headaches in the future. One developer will change RevEngine and silently break the ReverseNames build. I say "silently," because it could be something like adding a new unit test project that doesn't get run by ReverseNames because it doesn't get pulled from source control.

Because it's a shared dependency, RevEngine needs to be in its own solution under _Shared and published as a NuGet package.

Right about now, someone's saying, "But but but! I need to be able to step through that code! And make changes that I can test against ReverseIt!!"

This might point to too much coupling between the projects, but so what? That's what you need. For debugging, you can publish a symbols package alongside the NuGet package so you can still step into the shared code.

If you really need to change code in the context of the solution,

  1. Drop the RevEngine NuGet reference from ReverseNames.
  2. Get the latest RevEngine code into its _Shared\RevEngine folder.
  3. Temporarily add the project reference to ReverseNames.
  4. Do the work.
  5. When finished, drop the RevEngine project reference.
  6. Re-add the NuGet reference (which doesn't have your changes, sorry).
  7. Open the RevEngine solution and run the tests.
  8. Commit the RevEngine changes, which, sorry, need to be taken through QA, published, etc.
  9. When that's finished, update the RevEngine NuGet package in ReverseNames.
  10. Run the tests, commit, QA, etc.

In other words, you need to treat RevEngine as if it were some third party assembly like Entity Framework or NLog.

All of this leads to...

Key Thinking to Managing Dependencies

  1. Treat your dependencies as if they're third party.
  2. Shared dependencies need to be in their own solutions.
  3. Ask: what does it take to check out, build and test the solution on a new computer?
  4. Ask: how would you store the project(s) on GitHub or another public remote repository?

The Plan

I'm going to do just three things, but they'll make a big difference.

  1. Reorganize the solutions into discrete team projects
  2. Publish shared project references as NuGet packages
  3. Update projects to use the packages

Important
Before getting started in a production environment, I'd disable all of the affected TFS Build definitions. I don't want anything running that doesn't need to.

I would also make a backup copy of all the affected source code, just in case something gets lost.

Reorganizing into Team Projects

First, I'll create my new team projects. Then I'll move my code.

Note
You can also add team projects using the Visual Studio Team Explorer.

  1. Open TFS in the browser, e.g. http://nesbit:8080/tfs. Or, if you know your collection's name, you can open it directly and skip to step 4. (e.g. http://nesbit:8080/tfs/CICollection3/_admin)

  2. Click the upper right hand corner settings "gear" icon to open the Manage Server page.

    2017-03-27_085328

  3. Select the Collection holding your team projects, and click "View the collection administration page".

    2017-03-27_085913

  4. Click New Team Project, enter the information and Create.

    2017-03-27_090645

Repeat to create the four team projects.

2017-03-27_091136

We can't move the code using the TFS web application, so we'll use Visual Studio instead.

  1. Open Visual Studio

  2. Open Team Explorer and click the plug icon to Manage Connections

    2017-03-27_091617

  3. Double click the collection you're using to connect to it.

    2017-03-27_091812

  4. Open Source Control Explorer.

    2017-03-27_091911

  5. The new team project folders need to be mapped to local folders. This is kind of a pain, but with TFS Version Control there's no getting around it (it's easier with git). I would create a new folder named something like TempTeams to hold the new team projects, finish the moves, then delete all my source code mappings and start over. Like I said, a pain. Be very careful throughout that you don't accidentally delete source code from TFS that you wanted to keep.

  6. To map a team project folder, select it and click the Not Mapped link. Enter the destination folder, and when prompted Get the latest code (there won't be any, that's OK). Map all the team project folders.

    2017-03-27_093743

  7. Open the NameDb solution. TFS still doesn't natively allow moving multiple files/folders at once, so we need to move the project contents one at a time. First, I'll move the solution file: right-click it, select Move, and enter the NameDb team project as the destination.

    2017-03-27_092818

    2017-03-27_094508

  8. Move the NameDb and NameDb.Tests project folders the same way. You can right-click and move an entire folder, just not multiple folders.

  9. When finished, Commit the changes.

  10. You can now delete the NameDb folder from under $/Main/_Shared and commit that change.

Now I'm going to move just the RevEngine project folders to the new RevEngine team project. Later, I'll create their solution file.

Open the ReverseIt folder. Move the RevEngine and RevEngine.Tests folders.

2017-03-27_095844

2017-03-27_100012

At this point, I move the remaining ReverseIt files/folders to their new team project. Likewise the ReverseNames solution.

Commit the changes. Delete the folders from $/Main, and commit that change, too.

Do NOT delete the $/Main team project! The TFS Build definitions would be deleted, too!

Finally, go to the RevEngine project in your local working folder, i.e. ..TempTeams\RevEngine\RevEngine, and open RevEngine.csproj. This opens the project inside a solution; we just haven't saved the solution file yet.

Add the RevEngine.Tests project to the solution.

2017-03-27_101947

Now I have to be careful. I select the solution in Solution Explorer. Then, File > Save RevEngine.sln As.

2017-03-27_102153

In the Save As dialog, I navigate up one folder, so my solution file is at the root of RevEngine.

2017-03-27_102556

Now, I drag and drop the solution file into the Source Control Explorer's RevEngine team project.

2017-03-27_102848

Commit the change.

My projects are reorganized, and a couple will build (NameDb and RevEngine). Time to handle the RevEngine dependency.

Publish shared project references as NuGet packages

I'm still working in the TempTeams folder. I'll wait until everything's working before going back to my preferred folders.

Creating NuGet packages can be complex. For this walkthrough, I'm showing the simplest thing that works; I'm sure these steps are not ideal. The following assumes I have a local NuGet server at http://localhost:7275/nuget that doesn't require an API key for pushing packages (not recommended), and does allow pushing symbols.
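For reference, pointing clients at a feed like that is just a package source entry in nuget.config. A minimal sketch, assuming the local server above (the "Local" key name is arbitrary):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Key name is arbitrary; the URL is the local feed assumed above. -->
    <add key="Local" value="http://localhost:7275/nuget" />
  </packageSources>
</configuration>
```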

  1. Open the solution, and edit the RevEngine Project Properties > Application > Assembly Information.

  2. Ensure Title, Description, Company and Product are filled in.

    2017-03-27_113329

  3. Save and build the solution. You must build, because nuget packs the built dll. It does not build the solution for you.

  4. Download the latest recommended NuGet.exe file.

  5. Put nuget.exe in the RevEngine project folder.

  6. Open a command prompt and change directory to the RevEngine project folder.

  7. Run nuget spec to create a RevEngine.nuspec file.

  8. Edit RevEngine.nuspec and make these changes:

    <?xml version="1.0"?>
    <package >
      <metadata>
        <id>$id$</id>
        <version>$version$</version>
        <title>$title$</title>
        <authors>$author$</authors>
        <owners>$author$</owners>
        <description>$description$</description>
      </metadata>
    </package>
    
  9. Run nuget pack -Symbols to create the regular and symbols package. Remember that, in our case, we want a symbols package so that we can step through the assembly without using a project reference.

  10. Run nuget push *.nupkg -Source http://localhost:7275/api/v2/package. This will push both of the packages.

Update projects to use the packages

This one should be pretty easy. In any solution that has RevEngine as a project reference, remove the project and the reference, then install the NuGet package. Notice that jamma.dll is installed as well, because RevEngine depends on it and the RevEngine project was referencing the jamma.dll NuGet package when it was packaged.

After updating, if I open ReverseIt (for example), put a breakpoint on this line,

Console.WriteLine(util.ReverseText(text));

then run the program, I can step into RevEngine.TextUtilities.cs, which is now part of the debugging symbols.

Update TFS Build Definitions

It's time to get our builds working again!

TFS doesn't natively support copying/moving build definitions. One solution is to write code using the TFS web API to clone definitions:

http://www.eidias.com/blog/2016/3/7/tfs-2015-cloneimportexport-build-definition-between-team-projects

However, there's a TFS extension for this, which really saves the day. You can download it here.

Export/Import Build Definitions

Note
If using TFS 2015, you must use version v0.0.2. Later versions only work with TFS 2017.

Follow the instructions to install the extension.

To install 'Export/Import Build Definition' (EIBD) on Team Foundation Server 2015 Update 2 and above, perform the following steps:

  1. Navigate to the Team Foundation Server Extensions page on your server. (For example, http://someserver:8080/tfs/_gallery/manage)
  2. Click Upload new extension and select the file you have just downloaded.
  3. After the extension has successfully uploaded, click Install and select the Team Project Collection to install into.

To move NameDb:

  1. In the Main team project Build tab, right-click the build definition and choose Export. Save the json file to a folder such as TfsBuildExports.

    2017-03-27_144558

  2. Change to the NameDb team project Build tab. EIBD has a known limitation: the Export/Import menu items can only be seen on a build definition name, not the "All build definitions" item. So, if necessary, create an empty definition and save it with a non-conflicting name.

  3. Right-click a definition and choose Import, selecting the .json file.

  4. Edit the imported definition.

  5. Make the following changes.

BUILD

  • NuGet Installer path: $/NameDb/NameDb.sln
  • Build path: $/NameDb/NameDb.sln
  • Test path: *$(BuildConfiguration)*test.dll;-:*\obj*

REPOSITORY

  • Map $/NameDb, leave Local Path empty

    2017-03-27_145831

TRIGGERS

  • Include $/NameDb
  6. Test!
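The path changes above are mechanical, so with many definitions you could script them against the exported .json file before importing. A minimal sketch (the field names here are illustrative; a real exported definition has many more, and a naive string replace should be double-checked against similarly named sibling paths):

```python
import json

def retarget_definition(definition, old_root, new_root):
    """Return a copy of an exported build definition with every
    occurrence of the old server path root rewritten to the new one,
    e.g. $/Main/_Shared/NameDb -> $/NameDb."""
    text = json.dumps(definition)
    return json.loads(text.replace(old_root, new_root))

# Illustrative shape only; a real export has many more fields.
exported = {
    "name": "NameDb",
    "build": [{"inputs": {"solution": "$/Main/_Shared/NameDb/NameDb.sln"}}],
}
moved = retarget_definition(exported, "$/Main/_Shared/NameDb", "$/NameDb")
print(moved["build"][0]["inputs"]["solution"])  # $/NameDb/NameDb.sln
```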

ReverseIt

Use the same approach as NameDb, namely changing the paths in Build, Repository and Triggers. (In fact, ReverseIt would work with the default Visual Studio template.)

ReverseNames

Likewise, ReverseNames can be simplified because I no longer have the RevEngine project to deal with. In fact, all I have to do is delete anything related to RevEngine, then update the remaining paths as I've done above.

RevEngine

This is a new build definition, and it follows the same simplified pattern as above.

What Just Happened?

I'll tell you what. Our build definitions got simpler because

  1. We converted our project references to NuGet packages.
  2. We contained our code in team projects.

Admittedly, the sample was a pretty simple case. I could have a team project that legitimately encompasses multiple solutions. But if I still apply the key principles from above, I can have clean maintenance and simpler builds. As a bonus, it should be much easier to switch to git if I want, since I'm now treating my code as discrete components instead of a monolith.

Clean Up

I can now delete the $/Main team project. But, despite there being a right-click menu item, I can't do it from Source Control Explorer. So, (sigh), back to the web interface and my collection administration page. Select Delete from the dropdown to the left of the team project.

2017-03-27_101321

Am I sure the team project is empty? If so, enter its name and delete it.

2017-03-27_101437

References

Creating a Package
Configuring NuGet Behavior
Using Private NuGet Repository
How to Get TFS2015 Build to Restore from Custom NuGet Source 1
How to Get TFS2015 Build to Restore from Custom NuGet Source 2
NuGet Package Source Config Locations
Introducing NuGet 4
Specifying NuGet Config path for TFS Build

Next Part: TFS Continuous Integration Walk Through Part 5d - Multiple Solutions: Build Settings