A pleasant walk through computing

Comment for me? Send an email. I might even update the post!

Posh-git Fork to Allow Visual Studio to Use TFVC and Ignore Git

Source Code: bladewolf55/posh-git-tfvc: A PowerShell environment for Git, modified to use with TFVC-based solutions

I created this version of posh-git to handle a specific use case: a developer works for a company that requires using TFVC, but she wants to use Git locally to gain its rapid branching/merging abilities. Yet, she also wants to get the benefits of using Visual Studio's TFVC integration, namely CodeLens information.

Visual Studio supports both TFVC and Git version control, and you can choose which one to use via Options > Source Control. Except, not really. If your solution has a .git folder or .git file, VS assumes you're using Git, even if you also have a $tf folder or your solution/projects are configured for TFVC.

The general solution is:

  • Rename .git to _git
  • Set environment variable GIT_DIR to the full path to the _git folder
  • Set environment variable GIT_WORK_TREE to the full path to the working folder
  • Add a line to .gitignore to ignore the _git folder

My modifications to the GitUtils.ps1 file accomplish that.
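
If you want to walk through the workaround by hand, the steps above look roughly like this in PowerShell. This is only a sketch: the path is a placeholder, the environment variables are set for the current session only, and it's just the manual equivalent of what the modified GitUtils.ps1 is meant to automate.

    # Minimal sketch of the manual steps. C:\src\MySolution is a placeholder;
    # use your own solution folder.
    $work = 'C:\src\MySolution'

    # 1. Rename .git to _git so Visual Studio stops detecting a Git repo.
    Rename-Item -Path (Join-Path $work '.git') -NewName '_git'

    # 2. Point Git at the renamed folder and at the working tree.
    #    (Set here for the current session only.)
    $env:GIT_DIR = Join-Path $work '_git'
    $env:GIT_WORK_TREE = $work

    # 3. Keep the renamed folder out of the repo itself.
    Add-Content -Path (Join-Path $work '.gitignore') -Value '_git/'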

As a bonus, in the test _git folder there's a file containing a Git alias to initialize a repo using a _git folder and the appropriate .gitignore and .tfignore changes.

It's not fancy, and I'm sure a better developer can improve on it.

ASP.NET Core Controllers - Exploring How To Test a Simple Feature

Setup

Here's a brief feature description: When a user story is saved, if it's new then it's assigned the latest sequence number + 1.

How might this be developed:

  1. Using Test-Driven Development (TDD)
  2. ...in a web application
  3. ...that calls a service?
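
The examples below assume a model along these lines. It's a minimal sketch of my own; the real UserStory would presumably have more fields, and Sequence might well be a non-nullable int in your version.

    // A minimal, assumed model for the examples that follow.
    public class UserStory
    {
        public int Id { get; set; }
        public string Title { get; set; }
        // Null or 0 until the service assigns MaxSequence + 1 on first save.
        public int? Sequence { get; set; }
    }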

Puzzle 1: The Controller

In truth, this article isn't specific to ASP.NET Core. But it's what I was working on at the time, and I found the answer via a Core-specific article.

To illustrate where we can have mental hiccups, let's start with the controller and work backwards.

    [HttpPost]
    public IActionResult SaveStory(UserStory model)
    {
        if (!ModelState.IsValid)
        {
            return View(model);
        }

        //Save
        //The service takes care of setting the Sequence property on new models.
        model = _storyService.SaveStory(model);

        return RedirectToAction(actionName: nameof(Index));
    }

This is reasonable code. The controller passes the model to the service, and the service implements the business rule of incrementing the Sequence property.

Take a minute and ask yourself: What would your controller's unit test...test?

Done? Now ask yourself: If I hadn't written any code yet, what would I test for?

Maybe your first answer started off something like this in your imagination.


    [Fact]
    public void SaveStory_increments_UserStory_Sequence_by_one()
    {
        //arrange
        //arrange
        var service = new MockUserStoryService();
        service.MaxSequence = 10;
        UserStory userStory = new UserStory(service)
        {
            // set some fields
        };
        var controller = new HomeController(service);
        //act
        var result = controller.SaveStory(userStory);
        //assert
        result.Should().Be()...uh...um....
    }

And there's the trap. What I, and I'm sure others, find hard about unit testing and TDD is being clear on the dependencies of what's being tested.

In TDD, ask yourself, "What is this unit going to do or change by itself?"

My first thought would be, "Well, the Sequence is going to change. That's the feature, after all." But that isn't what the controller is doing.

Assuming no errors, the only thing the controller does is pass the model to the service's SaveStory method.

The service is a dependency, and we don't test a dependency's behavior. Let me call that one out, because it's crucial.

In unit testing, don't test a dependency's behavior.

You always control the dependency's state, and always return a value you've determined. What you test is what the unit is supposed to do with that value. This is why we mock dependencies.

OK, what's the unit test for the controller? I admit, I was puzzled until I read Steve Smith's article, Test controller logic in ASP.NET Core | Microsoft Docs.

I should ensure that the service's SaveStory method was called. I don't need to test that something was actually saved, only that the save was requested. He uses Moq's Verify feature for this. I can implement a similar feature in a hand-rolled mock.
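
For comparison, here's roughly what that check looks like with Moq. This isn't code from Smith's article; IUserStoryService, the HomeController constructor, and the method signatures are assumptions based on the snippets in this post.

    [Fact]
    public void SaveStory_calls_SaveStory_on_the_service()
    {
        //arrange
        var service = new Mock<IUserStoryService>();
        service.Setup(s => s.SaveStory(It.IsAny<UserStory>()))
               .Returns<UserStory>(s => s);
        var controller = new HomeController(service.Object);
        var userStory = new UserStory();
        //act
        controller.SaveStory(userStory);
        //assert: the controller's only job here is to hand the model to the service.
        service.Verify(s => s.SaveStory(userStory), Times.Once());
    }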

    using System.Collections.ObjectModel;
    using System.Linq;

    // https://stackoverflow.com/a/11296961/1628707
    // This is one of those cases where it's simpler to inherit Collection<T>
    // and add a couple of needed methods.
    public class CalledMethods : Collection<CalledMethod>
    {
    {

        public CalledMethod this[string name]
        {
            get { return this.SingleOrDefault(a => a.Name == name); }
        }

        private CalledMethod AddAndReturn(string name)
        {
            
            if (this[name] == null) Add(new CalledMethod(name,0));
            return this[name];
        }

        /// <summary>
        /// Adds a <see cref="CalledMethod"/> if necessary and increments its <see cref="CalledMethod.Count"/>
        /// </summary>
        /// <param name="name"></param>
        public void Increment(string name)
        {
            var entry = this[name] ?? AddAndReturn(name);
            entry.Count++;
        }
    }
    
    public class CalledMethod
    {
        public string Name { get; set; }
        public int Count { get; set; }

        public CalledMethod() { }
        public CalledMethod(string name, int count = 0)
        {
            Name = name;
            Count = count;
        }
    }

Calling it from the mock class's method:

    public UserStory SaveStory(UserStory story)
    {
        // Record the call under this method's own name ("SaveStory").
        CalledMethods.Increment(System.Reflection.MethodBase.GetCurrentMethod().Name);
        // Throw if the test set up the mock to simulate an exception.
        CheckException();
        // UserStory here is the mock's property holding the canned return value.
        return UserStory;
    }

And using it in the test:

    // assume arrange and act before this, then
    _storyService.CalledMethods["SaveStory"].Count.Should().Be(1);
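
Putting the pieces together, a complete controller test with the hand-rolled mock might look like this. The MockUserStoryService and HomeController constructors are assumptions based on the snippets above, and the redirect check simply mirrors the controller's RedirectToAction call.

    [Fact]
    public void SaveStory_passes_the_model_to_the_service()
    {
        //arrange
        var storyService = new MockUserStoryService();
        var controller = new HomeController(storyService);
        var userStory = new UserStory();
        //act
        var result = controller.SaveStory(userStory);
        //assert: the save was requested exactly once, and the user was redirected.
        storyService.CalledMethods["SaveStory"].Count.Should().Be(1);
        result.Should().BeOfType<RedirectToActionResult>();
    }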

Puzzle 2: The Service

We still haven't implemented the feature. In fact, arguably we shouldn't have written the controller or its test at all; the controller doesn't save the story, the service does.

Regardless, let's write the test first this time:

    [Fact]
    public void SaveStory_sets_new_UserStory_Sequence_to_Max_plus_one()
    {
        var service = new UserStoryService();
        var userStory = new UserStory()
        {
            //set needed fields. Sequence is null or 0.
        };
        userStory = service.SaveStory(userStory);

        userStory.Sequence.Should().Be(???);
    }

Yeah. We run into the question of how to set up the MaxSequence. But writing the test is helping us. We need to answer:

  1. Does the service depend on something else to get the MaxSequence?
  2. If so, mock it
  3. If not, it will be a functional test

Let's assume our service depends on a data service, and finish the unit test.

    [Fact]
    public void SaveStory_sets_new_UserStory_Sequence_to_Max_plus_one()
    {
        var dataService = new MockDataService();
        dataService.MaxSequence = 15;
        var service = new UserStoryService(dataService);
        var userStory = new UserStory()
        {
            //set needed fields. Sequence is null or 0.
        };
        userStory = service.SaveStory(userStory);

        userStory.Sequence.Should().Be(16);
    }
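
For completeness, here's a sketch of a service that would make this test pass. IDataService and its GetMaxSequence/SaveStory members are my assumptions, not necessarily the article's actual types; the point is only that the Sequence rule lives in the service, not the controller.

    // A rough sketch of a service that satisfies the test above.
    public class UserStoryService : IUserStoryService
    {
        private readonly IDataService _dataService;

        public UserStoryService(IDataService dataService)
        {
            _dataService = dataService;
        }

        public UserStory SaveStory(UserStory story)
        {
            // New stories have no sequence yet, so assign MaxSequence + 1.
            if ((story.Sequence ?? 0) == 0)
            {
                story.Sequence = _dataService.GetMaxSequence() + 1;
            }
            return _dataService.SaveStory(story);
        }
    }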

For you to figure out: What if SaveStory were a void method?

Functional Testing: The Proof of the Pudding Is In the Tasting

At some point, some piece of code is actually persisting data. There's no way to unit test that. If your service depends on an ORM such as Entity Framework (EF), then you can mock EF. But if you want to test that your concrete UnitOfWork/Repository/DbContext/Whatever works as expected, you have to use a real database and check the values. Another example: if you at some point write to a file, you'll need to write functional tests for that and verify that what was written is what you expected.

Bonus: how might the functional tests look? Remember, these will be slower and likely run as part of a separate project, just like your integration tests.

    public class DataServiceTests
    {
        Db _db = new Db();

        public DataServiceTests()
        {
            //In xUnit.net, the constructor is used to reset the environment
            //to a known state.
            //There could be a lot of actions to take, so this is simplistic.
            _db.Reset();
        }

        [Fact]
        public void GetMaxSequence_returns_expected_value()
        {
            var service = new DataService(_db);
            //Our known starting point for MaxSequence is 10.
            service.GetMaxSequence().Should().Be(10);
        }

        [Fact]
        public void IncrementMaxSequence_sets_expected_value()
        {
            var service = new DataService(_db);
            // _db is reset before every test, so MaxSequence is 10 again.
            service.IncrementMaxSequence(1);
            service.GetMaxSequence().Should().Be(11);
        }
    }

Wrap Up

TDD isn't nearly so much about what to do as about how to think. In particular, I find it forces thinking about how to decouple code and make it testable. The tricky part, which takes practice, is seeing what is a dependency and what isn't, and knowing what your unit is responsible for.

I think learning from the simplest cases is great, because it teaches the principles to apply.

Remote Micro-Exclusions: Two Poor Daily Standup Practices

Remote (Micro) Exclusion

"Remote exclusion" happens when remote developers are treated as less equal than on-site developers. This usually isn't intentional, but is instead a result of group dynamics.

Some behaviors, such as not including remote workers in decisions because it's "too much of a bother" to contact them, are obvious when pointed out. But other actions seem innocuous, yet contribute to the problem. These are "remote micro-exclusions."

Consider the daily standup meeting where the bulk of the team is on site, and a few are remote. Here are two practices that can unconsciously devalue the remote team.

No Video

There are three basic ways to communicate with the remote team in a live meeting.

  1. Audio Only
  2. One-Way Video
  3. Two-Way Video

The first two ways are a problem.

  • Audio Only: Unless someone on the remote team is vociferous, they'll be ghosts, rarely seen or heard.
  • One-Way Video: To my mind, this is worse than audio-only. The implication is, "they can see us, but we don't need to see them." The on-site team only sees their avatars, at best.

What's critically missing without two-way video are the visual cues. How is the remote team reacting? What are they seeing at the main site? What is everyone communicating physically?

Two-Way Video is a must because "if words and body language disagree, one tends to believe the body language."[1] And in those cases, body language can account for 55% of communication.

Remote Team Last

If the remote team always goes last, it's likely they'll always have less time. Unless a daily standup is run really strictly, there's going to be conversation about whatever each developer is working on. If there's no two-way video, it's worse: the "main" team will tend to dominate the conversation because they can see each other. Consider that for a team of eight, a fifteen-minute standup gives each person just under two minutes. That's honestly plenty of time to report what happened yesterday, what's being worked on today, and what's blocking, to get quick answers, and to set up follow-ups on issues that take too long for the meeting.

Yes, this is a standup management problem. "I'll email you to schedule a talk" should be said frequently. But the remote team is still devalued by going consistently last.

Solutions

  1. Everyone must be visible on video. Work as if everyone is remote.
  2. Start standups with the remote team for a week or two, to remind everyone they're equally important. Then, randomize the order people go in.

Remote work can benefit many companies and employees. It takes effort, but is a worthwhile practice to learn.

References


  1. Albert Mehrabian’s 7-38-55 Rule of Personal Communication. It should be noted that these oft-quoted ratios have their limitations and critics.