I have been really fascinated by the types of bugs that only manifest under load. It is the kind of issue you have little chance of reproducing with standard live debugging in Visual Studio. To bring these scenarios closer to my traditional dev experience, I am installing a CLI tool called Siege under the Windows Subsystem for Linux. It helps me generate HTTP load and perform some basic benchmarking.

sudo apt-get install siege

I can now run a load test against any HTTP endpoint. For example, the following command mimics 100 simultaneous users (-c 100) for one minute (-t1M):

siege -c 100 -t1M https://localhost:5001/lowcpu/uses-too-many-threadpool-threads-v1
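A couple of other siege invocations I find useful (a sketch based on siege's documented flags; the URL is just the same demo endpoint):

```shell
# 25 concurrent users making 10 repetitions each, instead of running for a fixed time:
siege -c 25 -r 10 https://localhost:5001/lowcpu/uses-too-many-threadpool-threads-v1

# Benchmark mode (-b) removes siege's default delay between requests,
# which puts maximum pressure on the server:
siege -b -c 100 -t1M https://localhost:5001/lowcpu/uses-too-many-threadpool-threads-v1
```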

I am also going to install a couple of .NET CLI tools (dotnet-counters and dotnet-dump) for monitoring and collecting managed dumps locally:

dotnet tool install --global dotnet-counters
dotnet tool install --global dotnet-dump
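For dotnet-dump, a typical workflow looks something like this (a sketch; the process id and dump filename are examples):

```shell
# Capture a full dump of the target process:
dotnet-dump collect -p 7932

# Open the dump in the interactive analysis REPL:
dotnet-dump analyze <path-to-dump-file>

# Inside the REPL, SOS commands are available, for example:
#   clrthreads   - list managed threads
#   threadpool   - thread pool statistics
#   syncblk      - sync blocks, useful when diagnosing lock contention
```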

I can use either tool to get a list of managed processes (e.g. dotnet-counters ps), and after finding the process I am interested in, I can begin to monitor it more closely with this command:

dotnet-counters monitor -p 7932
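To cut down the noise, dotnet-counters can be scoped to a specific provider, or asked to record to a file instead of displaying live (the process id is carried over from the example above):

```shell
# Monitor only the runtime counters, which include
# ThreadPool Thread Count and ThreadPool Queue Length:
dotnet-counters monitor -p 7932 --counters System.Runtime

# Record counter values to a file for later analysis:
dotnet-counters collect -p 7932 --format csv
```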

Buggy Demo Code - Sync over Async

The web app I am running (process id 7932) is my BuggyDemoCode app, which I use to deliberately recreate several variations of the sync-over-async anti-pattern (and other bugs). This is the controller action I will be testing; the problematic code comes down to .Result:

public IActionResult SyncOverAsyncResultV1()
{
    // .Result blocks the current thread pool thread until the task completes.
    string val = legacyService.DoAsyncOperationWell().Result;
    return Ok(val);
}

I have a service that is asynchronous, but the call to .Result forces the caller to block until the data returns, which unnecessarily ties up a thread pool thread. Thread pool threads are a limited resource you should guard as closely as CPU or memory. I captured a GIF of what dotnet-counters observes when running under load; notice how ThreadPool Queue Length and ThreadPool Thread Count steadily increase over time. This is really, really bad: you are bottlenecked, just not on the resources we tend to think of first.
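.Result is not the only way to fall into this trap. A sketch of other common forms of the same anti-pattern, reusing the hypothetical legacyService from the action above:

```csharp
public IActionResult SyncOverAsyncVariants()
{
    // 1. .Result blocks the request thread until the task completes.
    string a = legacyService.DoAsyncOperationWell().Result;

    // 2. .Wait() blocks in the same way, and wraps any exception
    //    in an AggregateException.
    Task<string> task = legacyService.DoAsyncOperationWell();
    task.Wait();

    // 3. GetAwaiter().GetResult() also blocks, but rethrows the
    //    original exception instead of an AggregateException.
    string b = legacyService.DoAsyncOperationWell().GetAwaiter().GetResult();

    return Ok(a);
}
```

All three hold a thread pool thread hostage for the duration of the asynchronous operation, so all three produce the same pattern in dotnet-counters.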


We can bring this code into compliance relatively easily: we transform the controller action into an asynchronous one and add the await keyword as follows:

public async Task<IActionResult> SyncOverAsyncResultFixed()
{
    // The thread returns to the pool while the operation is awaited.
    var result = await legacyService.DoAsyncOperationWell();
    return Ok(result);
}

Now, under the same load, our ThreadPool Queue Length stays close to zero, meaning there is almost no queued work waiting for a thread. Additionally, the number of thread pool threads required has been cut in half.


The message here is that you are better off having all synchronous calls or all asynchronous calls. Mixing them creates the sync-over-async anti-pattern we should all be carefully avoiding.
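Going "async all the way" means the service layer itself exposes an awaitable API. A minimal sketch, assuming a hypothetical ILegacyService interface (the names mirror the calls used above but are my own illustration):

```csharp
public interface ILegacyService
{
    Task<string> DoAsyncOperationWell();
}

public class LegacyService : ILegacyService
{
    // Async all the way down: no .Result, no .Wait() anywhere in the chain.
    public async Task<string> DoAsyncOperationWell()
    {
        // Stand-in for real I/O-bound work (database call, HTTP request, etc.);
        // the thread is released back to the pool while awaiting.
        await Task.Delay(500);
        return "done";
    }
}
```

Because every layer awaits rather than blocks, a request only occupies a thread pool thread while it is actually doing CPU work, which is why the fixed version needs far fewer threads under the same load.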

Shout out to David Fowler for his excellent notes on asynchronous programming, and to Mike Rousos for the tips on using siege and dotnet-counters.

