A pleasant walk through computing


ASP.NET Core Controllers - Exploring How To Test a Simple Feature


Here's a brief feature description: When a user story is saved, if it's new then it's assigned the latest sequence number + 1.

How might this be developed:

  1. Using Test-Driven Development (TDD)
  2. ...in a web application
  3. ...that calls a service?

Puzzle 1: The Controller

In truth, this article isn't specific to ASP.NET Core. But it's what I was working on at the time, and I found the answer via a Core-specific article.

To illustrate where we can have mental hiccups, let's start with the controller and work backwards.

    public IActionResult SaveStory(UserStory model)
    {
        if (!ModelState.IsValid)
        {
            return View(model);
        }

        // The service takes care of setting the Sequence property on new models.
        model = _storyService.SaveStory(model);

        return RedirectToAction(actionName: nameof(Index));
    }

This is reasonable code. The controller passes the model to the service, and the service implements the business rule of incrementing the Sequence property.

Take a minute and ask yourself: What would your controller's unit test...test?

Done? Now ask yourself: If I hadn't written any code yet, what would I test for?

Maybe your first answer started off something like this in your imagination.

    public void SaveStory_increments_UserStory_Sequence_by_one()
    {
        var service = new MockUserStoryService();
        service.MaxSequence = 10;
        var userStory = new UserStory()
        {
            // set some fields
        };
        var controller = new HomeController(service);
        var result = controller.SaveStory(userStory);
        Assert.Equal(11, userStory.Sequence);
    }

And there's the trap. What I, and I'm sure others, find hard about unit testing and TDD is being clear on the dependencies of what's being tested.

In TDD, ask yourself, "What is this unit going to do or change by itself?"

My first thought would be, "Well, the Sequence is going to change. That's the feature, after all." But that isn't what the controller is doing.

Assuming no errors, the only thing the controller does is pass the model to the service's SaveStory method.

The service is a dependency, and we don't test a dependency's behavior. Let me call that one out, because it's crucial.

In unit testing, don't test a dependency's behavior.

You always control the dependency's state, and always return a value you've determined. What you test is what the unit is supposed to do with that value. This is why we mock dependencies.

OK, what's the unit test for the controller? I admit, I was puzzled until I read Steve Smith's article, Test controller logic in ASP.NET Core | Microsoft Docs.

I should ensure that the service's SaveStory method was called. I don't need to test that something was saved, only that it should have been. He's using Moq's Verify feature for this. I can implement a similar feature in a self-created mock.
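For comparison, here's roughly what the Moq version looks like. (The IUserStoryService interface and constructor injection are my assumptions, not code from Smith's article.)

```csharp
// A sketch using Moq, assuming the controller takes the service via its constructor.
var mockService = new Mock<IUserStoryService>();
var controller = new HomeController(mockService.Object);

controller.SaveStory(new UserStory());

// Assert that the controller delegated to the service exactly once.
mockService.Verify(s => s.SaveStory(It.IsAny<UserStory>()), Times.Once);
```

Note that the assertion is about the *call*, not about any Sequence value.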

    using System.Collections.ObjectModel;
    using System.Linq;

    // https://stackoverflow.com/a/11296961/1628707
    // This is one of those cases where it's simpler to inherit Collection<T>
    // and add a couple of needed methods.
    public class CalledMethods : Collection<CalledMethod>
    {
        public CalledMethod this[string name]
        {
            get { return this.SingleOrDefault(a => a.Name == name); }
        }

        private CalledMethod AddAndReturn(string name)
        {
            if (this[name] == null) Add(new CalledMethod(name, 0));
            return this[name];
        }

        /// <summary>
        /// Adds a <see cref="CalledMethod"/> if necessary and increments its <see cref="CalledMethod.Count"/>.
        /// </summary>
        /// <param name="name">The method name to record.</param>
        public void Increment(string name)
        {
            var entry = this[name] ?? AddAndReturn(name);
            entry.Count++;
        }
    }

    public class CalledMethod
    {
        public string Name { get; set; }
        public int Count { get; set; }

        public CalledMethod() { }

        public CalledMethod(string name, int count = 0)
        {
            Name = name;
            Count = count;
        }
    }
Calling it from the mock service's SaveStory method:

    public UserStory SaveStory(UserStory story)
    {
        CalledMethods.Increment(nameof(SaveStory));
        return story;
    }

And using it in the test:

    // assume arrange and act before this, then
    var called = service.CalledMethods["SaveStory"];
    Assert.NotNull(called);
    Assert.Equal(1, called.Count);

Puzzle 2: The Service

We still haven't implemented the feature. In fact, arguably we shouldn't have written the controller or its test at all; the controller doesn't save the story, the service does.

Regardless, let's write the test first this time:

    public void SaveStory_sets_new_UserStory_Sequence_to_Max_plus_one()
    {
        var service = new UserStoryService();
        var userStory = new UserStory()
        {
            // set needed fields. Sequence is null or 0.
        };
        userStory = service.SaveStory(userStory);
    }

Yeah. We run into the question of how to set up the max Sequence. But writing the test is helping us. We need to answer:

  1. Does the service depend on something else to get the MaxSequence?
  2. If so, mock it.
  3. If not, this will be a functional test.

Let's assume our service depends on a data service, and finish the unit test.

    public void SaveStory_sets_new_UserStory_Sequence_to_Max_plus_one()
    {
        var dataService = new MockDataService();
        dataService.MaxSequence = 15;
        var service = new UserStoryService(dataService);
        var userStory = new UserStory()
        {
            // set needed fields. Sequence is null or 0.
        };
        userStory = service.SaveStory(userStory);

        Assert.Equal(16, userStory.Sequence);
    }

For you to figure out: What if SaveStory were a void method?
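One direction to consider, as a sketch (using the same hypothetical MockDataService and UserStoryService as above): since UserStory is a reference type, a void SaveStory can still set Sequence on the instance the test passed in, so the assert stays the same.

```csharp
// A sketch, not a definitive answer -- the types here are the hypothetical ones above.
public void SaveStory_void_version_still_sets_Sequence()
{
    var dataService = new MockDataService();
    dataService.MaxSequence = 15;
    var service = new UserStoryService(dataService);
    var userStory = new UserStory();

    // void return, but the service can mutate the object it was handed.
    service.SaveStory(userStory);

    Assert.Equal(16, userStory.Sequence);
}
```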

Functional Testing: The Proof of the Pudding Is In the Tasting

At some point, some piece of code is actually persisting data. There's no way to unit test that. If your service depends on an ORM such as Entity Framework (EF), then you can mock EF. But if you want to test that your concrete UnitOfWork/Repository/DbContext/Whatever works as expected, you have to use a real database and check the values. Another example: if you at some point write to a file, you'll need to write functional tests for that, and verify that what was written is what you expected.

Bonus: how might the functional tests look? Remember, these will be slower and likely run as part of a separate project, just like your integration tests.

    public class DataServiceTests
    {
        Db _db = new Db();

        public DataServiceTests()
        {
            //In xUnit.net, the constructor is used to reset the environment
            //to a known state.
            //There could be a lot of actions to take, so this is simplistic.
            _db.Reset();
        }

        public void GetMaxSequence_returns_expected_value()
        {
            var service = new DataService(_db);
            //Our known starting point for MaxSequence is 10.
            Assert.Equal(10, service.GetMaxSequence());
        }

        public void IncrementMaxSequence_sets_expected_value()
        {
            var service = new DataService(_db);
            // _db is reset before every test, so MaxSequence is 10 again.
            service.IncrementMaxSequence();
            Assert.Equal(11, service.GetMaxSequence());
        }
    }

Wrap Up

TDD isn't nearly so much what to do as how to think. Especially, I find it forces thinking about how to decouple code and make it testable. The tricky part, which takes practice, is seeing what is a dependency and what isn't: knowing what your unit is responsible for.

I think learning from the simplest cases is great, because it teaches the principles to apply.

Remote Micro-Exclusions: Two Poor Daily Standup Practices

Remote (Micro) Exclusion

"Remote exclusion" happens when remote developers are treated as less equal than on-site developers. This usually isn't intentional, but is instead a result of group dynamics.

Some behaviors, such as not including remote workers in decisions because it's "too much of a bother" to contact them, are obvious when pointed out. But there are other actions that seem innocuous, yet contribute to the problem. These are "remote micro-exclusions."

Consider the daily standup meeting where the bulk of the team is on site, and a few are remote. Here are two practices that can unconsciously devalue the remote team.

No Video

There are three basic ways to communicate with the remote team in a live meeting.

  1. Audio Only
  2. One-Way Video
  3. Two-Way Video

The first two ways are a problem.

  • Audio Only: Unless someone on the remote team is vociferous, they'll be ghosts, rarely seen or heard.
  • One-Way Video: To my mind, this is worse than audio-only. The implication is, "they can see us, but we don't need to see them." The on-site team only sees their avatars, at best.

What's critically missing without two-way video is the visual cues. How is the remote team reacting? What are they seeing at the main site? What is everyone communicating physically?

Two-Way Video is a must because, "if words and body language disagree, one tends to believe the body language."1 And in those cases, body language can be 55% of communication.

Remote Team Last

If the remote team always goes last, it's likely they'll always have less time. Unless a daily standup is run really strictly, there's going to be conversation about whatever each developer is working on. Without two-way video it's worse: the "main" team will tend to dominate the conversation because they can see each other. Consider that for a team of eight, a fifteen-minute standup gives each person just under two minutes. That's honestly plenty of time to report what happened yesterday, what's being worked on today, and what's blocking, to get quick answers, and to set up follow-ups on issues that take too long for the meeting.

Yes, this is a standup management problem. "I'll email you to schedule a talk" should be said frequently. But the remote team is still devalued by going consistently last.


What to do instead?

  1. Everyone must be visible on video. Work as if everyone is remote.
  2. Start standups with the remote team for a week or two, to remind everyone they're equally important. Then, randomize the order people go in.

Remote work can benefit many companies and employees. It takes effort, but is a worthwhile practice to learn.


  1. Albert Mehrabian's 7-38-55 Rule of Personal Communication. It should be noted that these oft-quoted ratios have their limitations and critics.

Memstate: The Practical Argument for Big, In-Memory Data

Today I listened to a fascinating episode of Jamie Taylor's The .NET Core Podcast, featuring an interview with developer Robert Friberg. Do listen to it.

Friberg and his team have been working on Memstate, an in-memory database. When I first started listening, I thought "this sounds like one of those interesting, technical edge-cases, but I'm not sure I see the point." By the end, I not only understood, I wanted to start using it! But since I don't know how soon that will happen, here's the compelling meat.

Relational databases solve a problem that doesn't exist anymore. That problem is limited memory relative to dataset size.

Your OLTP data can probably all fit in RAM.

Here, simplistically, is how Memstate looks compared to a relational database.

In a relational database, the data is: Read into Memory > Changed in Memory > Saved to Transaction Log > Saved to Storage > (Potentially) Reread into Memory

In a memory image, the data is: Already in Memory > Changed in Memory > Change is saved to Transaction log

The idea of keeping all needed data in RAM has been around for a while. Friberg recommends reading Martin Fowler's description of the pattern, and so do I. This tidbit helped me grasp how common the concept is:

The key element to a memory image is using event sourcing.... A familiar example of a system that uses event sourcing is a version control system. Every change is captured as a commit, and you can rebuild the current state of the code base by replaying the commits into an empty directory.

I think this is fantastically straightforward. Why persist state on every write? Why not just persist the change, and keep the state in memory? In the past, the answer was "We can't fit all the data into memory, so you have to read it each time." But as Friberg points out, you can configure an Azure virtual machine instance with 1 terabyte of RAM!
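A toy sketch of the idea (my own illustration, not Memstate's actual API): apply each change to in-memory state, persist only the change as a command, and rebuild the state by replaying the command log.

```csharp
using System.Collections.Generic;

// Not Memstate's API -- a minimal illustration of the memory image pattern.
public class Inventory
{
    private readonly Dictionary<string, int> _stock = new Dictionary<string, int>();

    public int Stock(string sku) => _stock.TryGetValue(sku, out var n) ? n : 0;

    // Apply a command to the live, in-memory state.
    public void Apply(StockChange change) =>
        _stock[change.Sku] = Stock(change.Sku) + change.Delta;

    // Rebuild the state by replaying the command log, like replaying commits
    // into an empty directory.
    public static Inventory Replay(IEnumerable<StockChange> log)
    {
        var inventory = new Inventory();
        foreach (var change in log) inventory.Apply(change);
        return inventory;
    }
}

// The only thing that needs durable persistence is the stream of these commands.
public class StockChange
{
    public string Sku { get; set; }
    public int Delta { get; set; }
}
```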

Friberg is clear that this method, like any, shouldn't be applied to all use cases. He nicely points out that even if you kept, for (his) example, all of IKEA's inventory levels for all its warehouses in memory, you wouldn't keep all the product descriptions and images. Consider what we already do today with blob assets:

  • We don't (typically) store images in version control. We reference them externally.
  • We keep images in cache because they're static.

(And, of course, it also doesn't mean you couldn't keep that data in memory.)

Imagine if your order entry system data were running fully in memory. Now ask, not rhetorically: could it? The answer will very likely be yes.

What happens if you need to perform maintenance? Surely it would take forever to replay all those millions of commands (transactions).

Friberg gives an example where a very large dataset could be loaded back into memory in about ten minutes.

Finally, while most of the podcast discussed big data, I wonder about small data. One of the persistence stores for Memstate is--you guessed it--a file. I think this would be a great solution for any app that needs a small database. The state would load fast, the app would run fast, and there wouldn't be any of the overhead of using a database engine.1

Want to go a step further? If the store is a file, and if the amount of data isn't huge, and if absolute top performance isn't necessary, then I'll bet the transaction information could be stored as plain text. And this means your data and history are future-proof.

And that's how I like my data.


  1. Not that, for instance, SQLite is big. I think it's just a .dll.