Tag: C#

Deserializing abstract types using Newtonsoft.Json

Recently I had to integrate with a “delivery” web service that provided its own contracts as part of a NuGet package. My immediate thought was “sweet, now I don’t have to do all of that boring typing to add all of the requisite types”. That was until I ran into some code that looks similar to this:
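The snippet itself isn’t reproduced here, but the shape of the problem was roughly the following: a concrete type holding a collection of an abstract base type (the subclass and property names below are illustrative, not the real contract):

    using System.Collections.Generic;

    public abstract class BaseDeliverable
    {
        public string Reference { get; set; }     // hypothetical shared property
    }

    public class ParcelDeliverable : BaseDeliverable
    {
        public decimal WeightKg { get; set; }     // hypothetical
    }

    public class LetterDeliverable : BaseDeliverable
    {
        public bool IsSigned { get; set; }        // hypothetical
    }

    public class Delivery
    {
        // The abstract element type is what trips up deserialization.
        public List<BaseDeliverable> Deliverables { get; set; }
    }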

The problem

When making calls to the delivery web service, the response was expected to be of type Delivery, which is shown above. The problem is that the Newtonsoft.Json serializer that WebApi uses by default is not able to determine which subclass of BaseDeliverable to use and therefore fails to deserialize the response. While this sucks for me, it’s not the fault of Json.Net: given only the JSON, the serializer has no way of knowing which concrete type to instantiate for an abstract base class.

My first instinct was to change these contracts entirely so that these objects that were related from a business point of view were not so closely related in the code. That would have involved removing the base class and adding a property to Delivery for each of the different types of delivery. Unfortunately, despite being the only consumer of the NuGet package, I wasn’t the only consumer of the service and the service itself used this contract package for (correctly) serializing responses to be sent to other clients.

The second option would have been to change the delivery service to output type information but, again, that would have broken the contract with the other consumers.

The solution

As I was not able to change the contract itself, I decided to change how the contract would be created when getting a response from the delivery web service. This led me to look into custom converters to see if I could determine which subclass to deserialize to based on the properties that the JSON object has.

After much Googling, I stumbled upon this article which had the majority of the code that I needed but I wanted to make it a bit more generic. Here’s my custom JsonConverter class:
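The class isn’t shown here, but a sketch of it based on the description below would look something like this (the class name KnownSubtypesConverter is mine and error handling is kept to a minimum):

    using System;
    using System.Linq;
    using System.Reflection;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;

    public class KnownSubtypesConverter : JsonConverter
    {
        private readonly Type _baseType;
        private readonly Type[] _subTypes;

        public KnownSubtypesConverter(Type baseType)
        {
            _baseType = baseType;

            // Store every concrete subtype of the given base type up front.
            _subTypes = baseType.Assembly.GetTypes()
                .Where(t => !t.IsAbstract && baseType.IsAssignableFrom(t))
                .ToArray();
        }

        public override bool CanConvert(Type objectType) => objectType == _baseType;

        public override bool CanWrite => false;

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            var obj = JObject.Load(reader);
            var jsonFieldNames = obj.Properties().Select(p => p.Name).OrderBy(n => n);

            // Pick the subtype whose public property names exactly match the JSON field names.
            var targetType = _subTypes.Single(t =>
                t.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                 .Select(p => p.Name)
                 .OrderBy(n => n)
                 .SequenceEqual(jsonFieldNames));

            // Create the instance ourselves and let the serializer populate it; calling
            // obj.ToObject<T>() here would re-enter this converter and overflow the stack.
            var instance = Activator.CreateInstance(targetType);
            serializer.Populate(obj.CreateReader(), instance);
            return instance;
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            throw new NotImplementedException();
        }
    }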

One of the key differences from the example given in the linked article is that this converter takes a Type as a constructor parameter. When constructing the converter, all subtypes of the specified type will be stored. In ReadJson, we determine which type to deserialize to by trying to match all of the public property names on each of the subtypes with the JSON field names present. In this implementation there must be an exact match but this could easily be changed to be more forgiving.

The major learning from the linked article is that you cannot use obj.ToObject&lt;T&gt;(reader) or JsonConvert.DeserializeObject&lt;T&gt;(obj.ToString()) here because the same converter will be instantiated and called, resulting in an infinite loop until a stack overflow is thrown! The method shown creates a new instance of the type that we’ve decided we want and then populates that instance using the serializer, which avoids calling the custom converter. Clever!
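For completeness, wiring a converter like this up at the point of deserialization might look roughly like the following (responseJson is just a placeholder for the raw JSON returned by the delivery service; you could equally add the converter to Web API’s JsonFormatter serializer settings):

    var settings = new JsonSerializerSettings();
    settings.Converters.Add(new KnownSubtypesConverter(typeof(BaseDeliverable)));

    var delivery = JsonConvert.DeserializeObject<Delivery>(responseJson, settings);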

Conclusion

To be honest, I wish I didn’t have to write this code but it does solve a problem that was blocking me from adding value to the client’s product! I’ve written this post mainly as a reference for myself so I don’t have to do so much Googling if I am ever stuck needing to do this again!

Migrating a full-framework Windows Service to .Net Core

With all the buzz around the release of .Net Core 2.0 many people are probably wondering how much effort it’s going to be to get their existing code over to .Net Core in Windows before they can even begin to think of making the app run in a cross-platform environment. Well, fear not! In this post, I’ll take a fairly simple Windows Service using the full .Net framework (4.6.1 to be exact) and convert it to a .Net Core 2.0 compatible app.

Full fat Windows service

The code for this service can be found on my GitHub. This service is a simple Windows Service that is intended to be deployed to AWS.

The service will register an SNS topic subscription using the amazing JustSaying library. Once the service is started, it will publish a message and consume the message itself. The contents of the message will then be written to an S3 bucket.

Windows Service hosting is provided by TopShelf and the HTTP endpoints for AWS health checks are served via Nancy self-hosting.

If you’re interested, go and clone the code, hit F5 and have a poke around. You’ll see some console logs for the following:

  • A JustSaying bus being created
  • Some listeners for a topic being set up
  • A test message being published to that topic
  • The contents of the message being written to an S3 bucket

Note: if you are going to run this solution, be sure to do a find & replace for accesskey and secretKey and provide your own AWS credentials! You’ll also have to change the bucket name that gets set in the GenericMessageHandler as bucket names have to be unique.

Now the fun begins

The original solution was created in Visual Studio 2017 but Visual Studio Code is the de facto editor of choice for .Net Core as it’s cross platform and has good support for .Net Core development. Setting up VS Code is not something that I am going to cover here but really all you need to do is download and install both the app and the C# extension for VS Code and you’ll be good to go.

There is an element of cheating here. When I say “Windows service” in .Net Core, I actually mean a website hosted in IIS that listens to our messages. The website part will just be for our AWS health checks that we saw before in the full framework version and also, by being hosted in a web server, our app will be kept alive. Just like a Windows service!

Setting up your skeleton

Similar to the full framework solution, this solution will just have a single project that’s the same name as the solution. Below is the output from my creation of the project structure:
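The captured console output isn’t reproduced here, but the commands I ran were along these lines (CoreMessagingService is a placeholder for the real solution and project name):

    dotnet new sln -n CoreMessagingService
    mkdir CoreMessagingService
    cd CoreMessagingService
    dotnet new webapi
    cd ..
    dotnet sln add CoreMessagingService/CoreMessagingService.csproj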

Looking at the commands above, these are the actions that I’m taking:

  1. Creating a new solution
  2. Making a new directory for our project
  3. Changing to that new directory
  4. Creating a new .Net Core project using the webapi template (more about these templates here)
  5. Going back up to the solution directory
  6. Adding our new project to the solution file (Note: solution files are kinda optional but I like them for organisational purposes and they play nicer with Visual Studio)

Now that we have a skeleton for our project, we’ll have a bunch of files in the folder:

Tidy up and build

Optional tidying up

Eagle-eyed readers will have spotted the folder named wwwroot in the section above. This folder is primarily used for storing static assets such as images and CSS files when building websites. We don’t need that so feel free to just delete this folder. If you do delete this folder, ensure that you also remove the reference to include it in the .csproj file. It should look something like this:
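The reference in question is a Folder item that includes "wwwroot\". Once it’s gone, the remaining .csproj should look roughly like this (this matches the standard .Net Core 2.0 webapi template project file, so yours may differ slightly):

    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
      </ItemGroup>

    </Project>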

Set up AWS health checks

Seeing as we are going to run this on AWS, we need to add some of the keep-alive endpoints that it uses to decide whether the EC2 node that your code is on is healthy. The template generated a ValuesController for us. Rename this file to HealthController (VS Code is sometimes a bit weird and you may have to rename both the file and the class). You can then remove all of the code inside of the class (keep the imports, namespace and class declarations) and replace it with these two lines:
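The exact two lines aren’t shown here, but something along these lines gives three GET routes returning a simple body (the route paths and response text are assumptions; adjust them to whatever your load balancer expects):

    [HttpGet("/"), HttpGet("/status"), HttpGet("/healthcheck")]
    public string Get() => "OK";

Because the route templates start with a forward slash, they ignore any controller-level route prefix left over from the template.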

Now we have the same 3 health check routes that we used the Nancy self hosting for in our full framework version. Note the lack of App_Start folder with a RouteConfig class. That’s because we are doing all of the route configuration via attributes in the controller (back to the original MVC way!).

Note: you can have convention based routing if you wish, just set this up at Startup. More on this later.

Build

One thing to know before we get started: if you’re ever asked to restore stuff, hit yes, it’s just .Net SDK and NuGet things. If you’re ever asked to add configurations or similar by VS Code, say yes, it will add some scripts into a .vscode folder.

In VS Code, hit Ctrl + Shift + B (all the keybindings that I give will be default Windows ones) to run the default build task. As you haven’t set one up yet, you’ll be prompted by VS Code to add the default build configuration. Once the build finishes, you’ll see some output like below and new bin and obj folders will have been created for you. Congratulations, you’ve just built a .Net Core Web API!

Adding the packages that we need

If you have a look at the .csproj file that was generated, you’ll notice that it’s a lot smaller than the equivalent file in the full framework project (discounting the NuGet package references). This is because all of the required files no longer have to be explicitly referenced in the .csproj as the build process is now smart enough to vacuum up all of the files in the same folder and include them by default. You can still explicitly add files and folders but there’s no need here.

In the full framework project, there were a number of NuGet packages to handle stuff such as listening for messages. We need to install some similar (but not exactly the same!) packages in our project. Back to the console (I’m using the integrated Git bash terminal in VS Code if you were wondering), the command to add new packages is dotnet add package so let’s do that for the packages that we need:

  • JustSaying (add in -v 5.0.0-beta-313 as the .Net Core compatible version is a pre-release package at the time of writing)
  • AWSSDK.S3
  • Microsoft.Extensions.DependencyInjection
  • Microsoft.Extensions.Logging.Console
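Put together, the commands look roughly like this (run from the project directory):

    dotnet add package JustSaying -v 5.0.0-beta-313
    dotnet add package AWSSDK.S3
    dotnet add package Microsoft.Extensions.DependencyInjection
    dotnet add package Microsoft.Extensions.Logging.Console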

Whilst the first two packages should be obvious, the last two may not be: Microsoft.Extensions.DependencyInjection is only used here to provide an extension method for Microsoft’s built-in DI framework. We could do without this generic method and do some type casting instead, but this is a lot neater. Microsoft.Extensions.Logging.Console simply provides us with hooks to set up console logging for our app in an abstracted way (no Console.WriteLine() thanks).

After installing all of these, the .csproj file should look like this:
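It isn’t reproduced exactly, but after adding the packages the .csproj should contain package references along these lines (the version numbers other than the JustSaying beta are illustrative; use whatever dotnet add resolves for you):

    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="AWSSDK.S3" Version="3.3.17" />
        <PackageReference Include="JustSaying" Version="5.0.0-beta-313" />
        <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
        <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.0.0" />
        <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="2.0.0" />
      </ItemGroup>

    </Project>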

Porting the existing code

The simplest way to move things over is to copy the classes that we need! Copy over GenericMessage.cs, GenericMessageHandler.cs and GenericMessageService.cs to the same folder as the new .csproj file. Open these files up in VS Code now that they are in your project folder and delete all of the imports in each of these files because the imports will be a bit different and it’s easier to start from scratch.

No changes are needed for GenericMessage.cs as it’s just a POCO. Moving on!

A lot of the red squigglies can be fixed with some simple importing of namespaces and you should go ahead and do that to leave us with the real problems. For the other changes, let’s work through the files one-by-one:

GenericMessageHandler.cs

We no longer want to create a new logger in the constructor using the NLog LogManager and instead want to be passed the ILoggerFactory to create an abstracted logger that we can add providers for in our setup code.

After this change our constructor will look like this:
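Roughly like this, at least; any other dependencies the handler takes (an S3 client, for example) are left out of the sketch:

    private readonly ILogger _logger;

    public GenericMessageHandler(ILoggerFactory loggerFactory)
    {
        // Create an abstracted logger; providers are added in the host setup code.
        _logger = loggerFactory.CreateLogger<GenericMessageHandler>();
    }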

The _logger field is of the same type, ILogger, but this one comes from the Microsoft.Extensions.Logging namespace rather than the NLog one.

Although the name of the interface is the same, the methods for our new ILogger have slightly different names. It’s a quick job to change both the Info() and the Error() calls to LogInformation() and LogError() respectively.

The last change for this file is changing ContentType = ContentType.ApplicationJson to ContentType = "application/json" as the .Net Core AWSSDK doesn’t have the helper constant built in. No big deal.

GenericMessageService.cs

As above, change the constructor to take an ILoggerFactory and switch up the method names to remove some more red squigglies.

We’ll also have to pass our ILoggerFactory to the constructor of the GenericMessageHandler that gets created when setting up our JustSaying stack.

JustSaying v4 (i.e. the full-framework version) had a dependency on NLog, but the pre-release v5 hooks into all of the logging goodness that we’ve been seeing above, so v5 no longer has the same dependency. NLog can still be used and we could set it up as a logging provider later, but JustSaying is no longer tied to using NLog and the logging abstractions give us more options. We can once again pass the ILoggerFactory when creating the stack for those hooks.

With those changes, creation of the JustSaying stack should look like this:

Startup and wiring everything up

If you open Program.cs you’ll see that it’s very different to our TopShelf host-builder code from the previous version. Currently, it does the following:

  • Creates a default WebHostBuilder with any command line parameters passed in
  • Tells it to use Startup as its start-up class
  • Builds the IWebHost
  • Starts running the web host

It’s the invocation of IWebHost.Run() that will start and keep our “website” alive. This is how we achieve the “always on” service-like behaviour but we also need to start our service and add some more logging hooks. After those changes, Program.cs will look like this:
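Program.cs isn’t reproduced verbatim here, but based on the walkthrough below it ends up looking roughly like this (Start() is an assumed name for the method on GenericMessageService that kicks the bus off):

    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Logging;

    public class Program
    {
        public static void Main(string[] args)
        {
            // Build and run are split so we can start our service before the host blocks on Run().
            var host = WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>()
                .UseIISIntegration()
                .ConfigureLogging(logging =>
                {
                    logging.AddConsole();
                    logging.AddDebug();
                })
                .Build();

            // Pull the messaging service out of the built-in DI container and start it.
            var messageService = host.Services.GetRequiredService<GenericMessageService>();
            messageService.Start();

            host.Run();
        }
    }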

We’re first splitting the building and running of the webhost into two steps. This is because any code in this file that comes after Run() won’t execute until the host shuts down. As for the additions to the webhost build, all we’ve done is tell our “website” to use IIS and add two different logging providers, one for the console and one for the Debug window. A good explanation of logging and how to add providers can be found in the Microsoft Documentation. This is where we’d hook in other logging providers such as NLog or Serilog as I mentioned above.

Whilst the webhost is held in captivity by the intermediary variable, we can pull our service out of the built-in DI container and start it. We can then let the webhost loose by calling Run().

It doesn’t look like we’ve done much here and we haven’t. All of the magic really lives in the Startup class that we told the webhost builder to use…

Startup.cs

If you have used OWIN before, the Startup class may feel familiar to you as it’s very similar to OWIN Startup classes. The purpose of this class is essentially to allow you to set up everything your application needs. Here’s what a “standard” Startup.cs looks like before we add anything to it:
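For reference, this is roughly what the .Net Core 2.0 webapi template generates:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // Register services with the built-in DI container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // Configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseMvc();
        }
    }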

Extensive documentation on how this system works and what hooks it offers can once again be found over at the Microsoft documentation site but here’s a walkthrough of what is going on above:

  • The IConfiguration object passed into the constructor is used to get configuration values from whatever configuration providers (e.g. configuration files, environment variables or command line parameters) are set up. For information on how config is handled in .Net Core, take a look here
  • The ConfigureServices() method is where we can set up what we want to make available to our DI container. Above, the AddMvc() call is just an extension method to add a bunch of MVC services to the container in one call
  • The Configure() method is where we can configure the HTTP pipeline and add any additional middleware components in. In the plain startup shown above, we’re telling our app to use the MVC framework and also to show us the developer exception page if we are running in a development environment. Once again, full documentation is available

The only thing that we need to add to ConfigureServices() is a single line to add our GenericMessageService to the container. The code for this is services.AddSingleton&lt;GenericMessageService&gt;();. I’ve put it above the line that adds the MVC services but it doesn’t matter where it goes.

For Configure(), a few more changes are needed. The full code for this method is below and I’ll walk through the changes:
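Here’s a sketch of the updated method (I’ve assumed the ApplicationStopping token is the hook being used; ApplicationStopped would work in much the same way):

    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        IServiceProvider serviceProvider, IApplicationLifetime applicationLifetime)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        // Register a delegate to be run when the application is shutting down.
        applicationLifetime.ApplicationStopping.Register(OnShutdown);

        app.UseMvc();

        // Local method: pull the messaging service from the container and stop it gracefully.
        void OnShutdown()
        {
            var messageService = serviceProvider.GetRequiredService<GenericMessageService>();
            messageService.Stop();
        }
    }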

All of the added code is there to give us additional behaviour when our application stops. Both IServiceProvider and IApplicationLifetime are method parameters that weren’t there before. The nice thing about the Configure() method is that any services that are available in the container can simply be added as parameters to the method and the framework will do the rest.

Using IApplicationLifetime we’re registering a delegate, OnShutdown() to be called when the application stops. This local method will pull our GenericMessageService out of the IServiceProvider and call Stop() so that our messaging service can be shut down gracefully.

These additions to Configure() are not necessarily needed but I think they paint a good picture of what the purpose of this method is and how it can be used.

Go!

From here you can put some breakpoints in the code and hit F5 to start debugging! The debugging controls should feel familiar to anyone who has used Visual Studio before.

Wrapping up

I know that this is a simple example but in a lot of cases, I do think that transitioning to .Net Core is more about getting the startup and hosting right than anything else.

The code for both the full framework and .Net Core projects can be found on my GitHub.

Functionally testing chatbots (part 2)

Introducing BotSpec!

In part one of this series I outlined the problems with testing Bot Framework chatbots and how there is a real gap in the tooling available right now. Today I can happily announce that what I would consider the first usable version of my own chatbot testing framework is now available on NuGet!

The goal for BotSpec is to provide a simple and discoverable interface for testing chatbots made with the Bot Framework. Hopefully I’m on the right path with the first version.

Simple use of BotSpec

Here’s some example code that shows off the basic features of BotSpec:

Code walkthrough

Set Up

In the OneTimeSetUp method we create an instance of Expect. This is the root object for interacting with and testing your chatbot. On creation, Expect will take your token and authenticate with the Bot Framework and start a new conversation.

Hello!

The first test, Saying_hello_should_return_Hello, shows the most basic of bot interactions: sending a message to the bot and expecting a response. Activities you send are most likely to be text-based messages (this includes button presses) but more complex activities that include attachments are also supported.

The next test creates the expectation that we will receive some activity that has text matching the phrase “Hello”. Pretty simple stuff so far, I know.

Drilling down into attachments

The last test shown, Saying_thumbnail_card_should_return_a_thumbnail_card, is an example of an expectation for an attachment that meets the 3 given criteria. The three expect statements will check that any activity received satisfies the condition given for each one. In our case one activity will satisfy all three as the one thumbnail card returned has all of the properties that we are looking for.

BotSpec features

As well as the simple things shown in the code example above there are a few nifty things that BotSpec can do to make testing easier.

Regex comparison and regex group matching

All of the methods that test strings are named {Property}Matching where {Property} is the name of the property being tested. The word Matching is used because it is not an equality check; these methods take a regex and use that to check whether our property is what we are expecting. (Note: I am considering adding in some options for string checking with string.Equals() being the default and regex being one of the options).

In the majority of cases this will be enough but sometimes you will need to keep a part of a response from your bot to check later. When this is the case, there is an option for using a group matching regex. It’s a bit more complicated than the stuff we’ve seen so far so I will lead with an example:

We’re still using the TextMatching method but this one takes a few more arguments:

  • The first one is the regex that we used before; this will similarly be used to check whether the given property matches the regex supplied.
  • The second string looks very similar to the first but has brackets around the [\d]* part. The brackets form a capture group and whenever we see something that matches the regex, we keep the match inside the brackets for later
  • All of the matches that we get from our group match will be collected and made available as a list of matches in an out param

Attachment retrieval and extraction

Attachments can either be sent with the activity or be referenced with a URL. Out of the box, BotSpec will work out for you whether to fetch the attachment via the given URL or to deserialise it from the provided JSON.

This is all using the default attachment extractor, which currently extracts attachments in the following way:

  • Works out whether the attachment content is a part of the activity or whether it resides remotely and needs to be retrieved
  • Selects all attachments that have the ContentType which matches the specified attachment type
  • Retrieves content with the provided URLs using the specified attachment retriever (more on that below)
  • Deserialises the content JSON to the specified type

The default attachment extractor can be overridden by setting AttachmentExtractorSettings.AttachmentExtractorType to Custom and assigning a custom IAttachmentExtractor implementation to AttachmentExtractorSettings.CustomAttachmentExtractor.

Similarly, the default attachment retriever can be overridden by setting AttachmentRetrieverSettings.AttachmentRetrieverType to Custom and assigning a custom IAttachmentRetriever implementation to AttachmentRetrieverSettings.CustomAttachmentRetriever. The default uses a simple WebClient to download the content as a string (it’s expecting JSON so doesn’t handle image content very well at the moment).

Waiting for a number of messages before asserting

Sometimes you may be expecting a bot to return a set number of responses for a given interaction and can only be sure that your expectation is met once all of the messages have been received. An example of this could be that you ask your bot for your “top 10 selling products”. As bot messages are not guaranteed to be delivered in order, you may want to wait for 11 messages (1 for your bot to inform the user that it is looking and 1 message for each one of the top 10 products) before checking the content of these messages.

This is as simple as telling BotSpec how many activities you’re expecting before carrying on the assertion chain:

Currently (subject to change because I know this should be more flexible), BotSpec will wait for one second and then try again for a total of 10 tries before failing.

Part of the reasoning for this feature was also that the Bot Framework is still very new and my personal experience is that clients dealing with it should be as fault tolerant as possible.

Wrap up

If you like the sound of BotSpec, grab the NuGet and start testing all of your bots!

If you’re interested in how BotSpec works under the hood, I’ll delve into the inner workings in my next post but if you can’t wait until then, have a look at the code on GitHub. If you discover any bugs or have any suggestions feel free to raise an issue on the project or even create a PR and become a contributor.

Direct Line v3 and the new C# Direct Line Client

Intro

One of the great things about the Bot Framework is that, out of the box, there are a bunch of channels to hook your bot up to without having to worry about any of the plumbing of communicating with those services. The currently supported list of channels can be found here (although the newly announced Microsoft Teams has not been added to that list yet).

But what do you do when the channels provided aren’t quite enough? That’s simple, you turn to Direct Line! Direct Line is a REST API for the Bot Framework that allows its users to create their own integrations. A great example of this is if you have a mobile app that you want to integrate a chatbot into directly. Microsoft aren’t going to make your app a channel for the Bot Framework as no one else will be able to integrate with it, but you can still get your users using your bot with Direct Line.

v3 of Direct Line

All of the documentation for the Direct Line API can be found on the Bot Framework site. Now, a little background on the first available version of the Direct Line API (v1.1); it sucked. It was quite flaky, prompts were styled as text and attachments were stripped out. There were a number of things that you could do to work around these issues but it was a pain. The new v3 version, however, is awesome and takes all of that pain away.

A client for v3

As well as releasing a new version of the API itself, the Bot Framework team released a new version of its Direct Line client NuGet package. At the time of writing, the new version is still in beta so you will need to include pre-release packages in your search to find it. That particular fact caused me several hours of pain in a project of my own, trying to work out what was going on, only to later find out that I wasn’t using the latest package.

Getting started with the new Direct Line Client

Let’s take a look at the simplest way to get started using the Direct Line Client. The 3 things that we want to be able to do are: start a conversation, send some text as an activity and get the responses.

Start a conversation

To get started we need a class that creates an instance of DirectLineClient and calls StartConversationAsync on the ConversationsResource:
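A minimal sketch of such a class (the class and method names here are mine, not from the package):

    using System.Threading.Tasks;
    using Microsoft.Bot.Connector.DirectLine;

    public class BotClient
    {
        private readonly DirectLineClient _client;
        private Conversation _conversation;

        public BotClient(string directLineSecret)
        {
            // The secret comes from the Direct Line channel configuration for your bot.
            _client = new DirectLineClient(directLineSecret);
        }

        public async Task StartConversationAsync()
        {
            // Keep hold of the returned Conversation so we can use its ConversationId later.
            _conversation = await _client.Conversations.StartConversationAsync();
        }
    }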

When creating the client, we pass our secret as an argument (your secret can be generated when activating the Direct Line channel for your bot). Calling StartConversationAsync on the Conversations resource then starts a conversation. We want to keep a reference to the returned Conversation object so that we can tell the Bot Framework that any messages we send or retrieve are for this specific conversation.

Sending messages

Once our conversation is started, we can start sending messages:
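Continuing the sketch from above, sending a plain text message might look roughly like this (SendMessageAsync is my name for the method):

    public async Task SendMessageAsync(string message, string userId)
    {
        var activity = new Activity
        {
            From = new ChannelAccount(userId),   // an id unique to each sender
            Type = "message",                    // tells the Bot Framework what kind of activity this is
            Text = message
        };

        await _client.Conversations.PostActivityAsync(_conversation.ConversationId, activity);
    }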

The Activity object has many fields and it’s hard to find out what is the minimum required. To send a plain text message the only things required are your message as a string, a ChannelAccount object that identifies the sender and a string which tells the Bot Framework what kind of activity we are sending (in most cases, this will be message). The ChannelAccount requires an id that is unique to each sender (the Bot Framework also allows group conversations with bots). Once that’s all set up, calling PostActivityAsync with the ConversationId from earlier and our activity will send our message to the bot.

Retrieving messages

Retrieving messages is very similar to sending messages:
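Again as a rough sketch on the same class:

    private string _watermark;

    public async Task<IList<Activity>> GetNewMessagesAsync()
    {
        // Ask for anything newer than the last watermark we saw.
        var activitySet = await _client.Conversations.GetActivitiesAsync(_conversation.ConversationId, _watermark);

        // Remember the new watermark so the next call only returns newer activities.
        _watermark = activitySet.Watermark;

        return activitySet.Activities;
    }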

A call to GetActivitiesAsync with the ConversationId and watermark will ask the Bot Framework for any messages that are newer than the value of the watermark. With the response, we keep track of what the watermark is so that we only retrieve new messages each time. Although the Watermark property type is string, it’s actually just a sequence number starting at 1 for each activity in a given conversation.

Summary

This is just a quick intro into using the new Direct Line C# client. There are a few other things that the client can do that we didn’t cover (resuming a conversation from before and uploading files to a conversation) but this should be enough to get going with.

A quick TL;DR would look like this:

  • Show pre-release packages when installing the Direct Line Client (currently, v3 is still in beta)
  • Initialise the client with your Direct Line secret
  • Start a conversation and keep a reference to at least the conversation id
  • When sending messages, be sure to include a ChannelAccount with an id unique to each user and to send message as the type
  • When receiving messages, keep a reference to the current watermark to only get new messages
  • The code shown here is available on my GitHub

EDIT: I originally didn’t specify that an Activity sent to the Bot Framework required a type. Not including a type will cause the Bot Framework to ignore your message! Almost always this type will be “message” but at times it may be something else.

Functionally testing chatbots (part 1)

This is the first post in a series of posts where I will talk about my experiences trying to functionally test chatbots built with the Bot Framework. This first post will cover the roadblocks that I encountered when trying to create functional tests that were easily reproducible and automatable. The rest of the series will look at a framework for testing Bot Framework chatbots that I have recently been developing.

When I first started thinking about how to test chatbots that I’ve written I had the following thought:

Although the Bot Framework is very new, it should be straight forward to write functional tests because interacting with a bot is just sending HTTP requests to a web API.

Although, yes, you do interact with the bot via a web API over HTTP, which already has proven methods of functional testing, testing a chatbot is very different. Anything more than the most basic of chatbots will have conversation state and conversation flow to worry about. There is also the fact that chatbots can respond multiple times to a single user message and that after sending a message you don’t immediately get a response, you have to ask for any new responses since the last time you called.

I initially started writing unit tests using my usual trio of NUnit, NSubstitute and Fluent Assertions for testing, mocking and asserting respectively. These tests quickly became unwieldy and involved more setup than testing.

Mocking all of the dialog dependencies as well as the factory creating dialogs and everything that the DialogContext does quickly makes tests look and feel very bloated. Also, due to the way that the MessagesController (example of this in the second code block here) uses the Conversations static object, unit testing our controller is tricky and requires what I would consider more effort than it’s worth.

In bots that I have written, my approach has been to treat dialogs like MVC/Web API controllers. What this means is that I try to keep them as thin as possible and only manage Bot Framework specific actions like responding to the user. Everything else is pushed down into service classes where I can unit test to my heart’s content! Coupling this approach with the difficulty in unit testing dialogs and the solitary controller responsible for calling the dialogs, I have opted to only cover them with functional and end-to-end tests.

The one advantage of testing dialogs at a higher level is that a BDD-style approach lends itself really nicely to conversation flow. Conversations can easily be expressed using a “Given, When, Then” syntax and this allows our tests to be easy to understand at a glance while also covering large portions of our conversation flow in one test.

Knowing that I wanted to use a BDD approach, I instantly added SpecFlow to a bot project and got to work on how to write the steps, but then I discovered that SpecFlow currently doesn’t fully support async/await. Since the Bot Framework relies heavily on async/await, SpecFlow was no longer an option.

Similarly to the earlier unit tests with masses of setup, even if SpecFlow had the async support that I needed, actually writing the tests in a succinct and clear way is still difficult. Let’s take the Sandwich Bot Sample as an example. This bot uses FormFlow to create a dialog that can create a sandwich order with a predefined set of steps. If we were to write an end-to-end test for a simple order in BDD style, it would have around 20 steps. Each one of those steps would either be sending a message or checking incoming messages for the content that we expect. We might receive multiple messages and have to check them all. Each one of those messages might have multiple attachments in the form of choice buttons. All of these buttons would have to be checked for the specific text that we want to assert on in our test.

To me, this seemed like there was something missing. I don’t want to keep writing code that checks all new messages from my bot for an attachment that has a button with text matching a pattern. Also, what do we do about the fact that a reply may not be instantly available after we send our message? Do we retry? Do we fail the test?

I’ve tried to take these questions and formulate them into a library that will ease the burden of testing chatbot conversation flow. It’s still a work in progress with lots of work still to be done but I believe that it can be useful for anyone writing chatbots with the intention of deploying them in enterprise where generally verification of new code is of high import.

My library can be found on GitHub but isn’t yet available as a NuGet as it’s not complete enough to publish (plus the name will probably change because the current one kinda sucks). The remaining posts in this series will look at how I have built this library, the reasons for certain architectural decisions and, hopefully, how anyone building chatbots with the Bot Framework can use it to ensure that their bot is still doing what it’s supposed to.

IoC in the Bot Framework

My first post is going to surface some information that is pretty difficult to find and doesn’t exist all in one place. Hopefully it will be of benefit to other people using the Bot Framework.

If you’ve used the Microsoft Bot Framework before, you’ll know that even though it’s well ahead of any other bot frameworks in terms of functionality, writing your bot code can be a bit tricky. There’s a bunch of hoops to jump through and the documentation isn’t always the most helpful.

One of the things that I’ve struggled with is that everything needs to be serializable. In theory this doesn’t sound like a problem until you want to separate your conversational bot code and your “service” code (i.e. calls to external APIs or data sources).

Solution 1: Service locator

The first solution that I came across was using a static factory for getting my dependencies.

This simple solution solved the problem with small amounts of code. Unfortunately, it’s an implementation of a widely known anti-pattern: the Service Locator.

For all of the usual reasons, the static factory wasn’t great. There was no way to mock the services in the factory as they were just new’d up at run time, it violates the Dependency Inversion principle of SOLID, and as I add more dependencies it will just grow and grow unless split into even more static factories!

Solution 2: Slightly improved service locator

The next evolution from here was to use an IoC container to create all of the dependencies and then to use the static factory to access the container and get the required services for the dialog class.
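Something along these lines, with the container built at application start and exposed through the factory (again illustrative, not the exact code):

    using Autofac;

    public static class ServiceFactory
    {
        // Built once at application start (or swapped for a test container in unit tests).
        public static IContainer Container { get; set; }

        public static T Get<T>()
        {
            return Container.Resolve<T>();
        }
    }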

Slightly better than before. I can now set up my container in my tests and have the factory return mocked interfaces where appropriate. For quite some time, this is the pattern that I used. I knew it wasn’t the best and I knew that it was still just a service locator.

Solution 3: Constructor injection via magic

I spent some time looking around for solutions and then finally in the depths of the BotBuilder GitHub issues I found these two issues raised by the same user:

https://github.com/Microsoft/BotBuilder/issues/106
https://github.com/Microsoft/BotBuilder/issues/938

Will gives a link to the Alarm Bot example in his comment, which is a good example of how to create the AutoFac bindings required to use constructor injection.

So using some AutoFac magic and a badly documented aspect of the Bot Framework we can add dependencies as constructor arguments and not have them serialized!

Following the AutoFac guide for WebApi, I first installed Autofac.WebApi2 NuGet package and then updated my Global.asax.cs to look like this:
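The exact file isn’t shown here, but a minimal sketch of that setup looks like this (the dialog-specific registrations from the Alarm Bot example are omitted and the service registration is illustrative):

    using System.Reflection;
    using System.Web.Http;
    using Autofac;
    using Autofac.Integration.WebApi;

    public class WebApiApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);

            var builder = new ContainerBuilder();

            // Register the Web API controllers (including the MessagesController).
            builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

            // Register our own services; this line is illustrative.
            builder.RegisterType<DeliveryService>().As<IDeliveryService>();

            var container = builder.Build();
            GlobalConfiguration.Configuration.DependencyResolver = new AutofacWebApiDependencyResolver(container);
        }
    }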

Now I get all of the usual benefits of doing dependency injection in a more traditional way. I can create mocks and pass them into the dialog for my tests, the dialog doesn’t have to worry about where the service comes from and I can more easily control the lifecycle of my dependencies.

Realistically I don’t want to write unit tests against my dialog as there is too much to mock (IDialogContext does a lot of stuff!) but now I can easily extract my conversational logic to another class and test it in isolation. That’s for another post though!

EDIT: this DI approach has since stopped working for me despite being the officially documented strategy. I’ve opened an issue on the Bot Framework’s GitHub. There’s been some discussion and it has since been tagged with “bug” but it doesn’t look like it’s going anywhere fast. Stay tuned for more updates!