The Foldable HOC for React-Table


Thanks to Tanner Linsley, who developed react-table, a great open-source data grid for ReactJs. While working on a big project for my company, I was required to add a feature to react-table that allows folding the columns.

So I developed a HOC for react-table providing that feature, and in this post I would like to share it with all ReactJs developers.

FoldableTable HOC

FoldableTable is a HOC that makes the columns foldable. I developed it while working on a real financial project that displays many columns for validation and comparison.

Foldable columns allow users to temporarily hide the unwanted columns so that they can focus on the data they want to see.

How it works

It scans all the columns where foldable is true and applies the folding feature to them. The feature works for both normal columns and header columns, as in the samples below.
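
To illustrate the idea, here is a hedged, React-free sketch of the folding pass (it is not the HOC's actual source; apart from `foldable`, the property names are my assumptions modeled on react-table column definitions):

```javascript
// Minimal sketch of the folding pass (plain JS, no React): columns marked
// `foldable: true` that are currently folded collapse to a slim placeholder
// column that renders no body cells.
function applyFolding(columns, foldedState) {
  return columns.map(col => {
    if (!col.foldable || !foldedState[col.accessor]) return col;
    // Folded: keep a narrow header carrying the unfold button, hide the cells.
    return { ...col, width: 30, Cell: () => null };
  });
}

const columns = [
  { Header: 'Name',   accessor: 'name' },
  { Header: 'Amount', accessor: 'amount', foldable: true },
];
const visible = applyFolding(columns, { amount: true });
```

Columns without `foldable: true` pass through untouched, which is why the feature is opt-in per column.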

  • With Header Columns

  • With Normal Columns

  • The FoldableTable is also fully compatible with the existing HOCs; below is an example with selectTableHOC.

State management

If you would like to manage the folding state of FoldableTable yourself, then add the following code.
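
As a hedged sketch (the controlled-prop names `folded` and `onFoldChange` are my assumption of the HOC's contract), the fold state can be lifted out of the table so it survives re-mounts:

```javascript
// Keep the fold state outside the table so it can be persisted
// (e.g. to localStorage or Redux) and restored later.
function createFoldStore(initial = {}) {
  let folded = { ...initial };
  return {
    get folded() { return { ...folded }; },
    onFoldChange(columnId, isFolded) {
      folded = { ...folded, [columnId]: isFolded };
    },
  };
}

const store = createFoldStore();
store.onFoldChange('amount', true); // user folds the Amount column
// Then pass folded={store.folded} and onFoldChange={store.onFoldChange}
// down to the FoldableTable component.
```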

Custom Components

There are two components that can be customized.

  • FoldIconComponent: renders the icon of the fold buttons.
  • FoldButtonComponent: renders the folding button for each column, with the default rendering as below.
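
As a hedged illustration (the real components receive React props and return React elements; here they are plain functions returning strings, and the prop shapes are my assumptions):

```javascript
// Illustrative stand-ins for the two customizable components.
const FoldIconComponent = ({ collapsed }) => (collapsed ? '>' : 'v');

const FoldButtonComponent = ({ header, collapsed }) =>
  '[' + FoldIconComponent({ collapsed }) + '] ' + (header || '');

// FoldButtonComponent({ header: 'Amount', collapsed: true }) represents the
// folded state of the Amount column's header button.
```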

Source Code

You can download the FoldableTable source code and samples here.

The FoldableTable HOC has also been submitted for merging into react-table. Hopefully, it will be available there soon.

I hope this library is useful to you.

Enable Https Endpoint for Service Fabric Application


In this post, I would like to share how to host a website in secure mode (HTTPS) on Service Fabric.

Assume that I have a ReactJs application running on Service Fabric with an HTTP endpoint as below.

Now, let's secure this site by applying SSL and running it on an HTTPS endpoint, with the following steps.

I. Generate SSL Certificate.

Generate an SSL certificate just for development purposes. If you want to host the application in PRD, you should request the certificate from your company's CA provider.

For the localhost Service Fabric cluster, I'm using the commands below to generate a certificate with OpenSSL.

  • Create a private key.

  • Create a certificate.

  • Convert the certificate to a .pfx file.

  • Import the .pfx file:
    Open mmc.exe and import the .pfx file into the Personal store of the Local Computer.
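
The first three steps can be sketched with OpenSSL as follows (the file names and the export password are placeholders of mine, not fixed values):

```shell
# 1. Create a private key (2048-bit RSA)
openssl genrsa -out localhost.key 2048

# 2. Create a self-signed certificate for localhost, valid for one year
openssl req -new -x509 -key localhost.key -out localhost.crt \
  -days 365 -subj "/CN=localhost"

# 3. Convert the certificate and key into a .pfx bundle
openssl pkcs12 -export -out localhost.pfx \
  -inkey localhost.key -in localhost.crt -password pass:MySecret123
```

Step 4, importing the .pfx, is done manually in mmc.exe as described above.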

If you don't have OpenSSL, download it here.

II. Enable HTTPS for Service Fabric Application

1. Add an application parameter.

Add a new parameter named CertThumbprint, whose value is the Thumbprint of the certificate, to Local.1Node.xml and Local.5Node.xml.

We may have different certificates for different environments, so we can push the Thumbprint value from the Continuous Delivery system.

2. Update the ApplicationManifest.xml

Add one more CertThumbprint parameter under the Parameters section, and one more EndpointCertificate named HttpsCert under the Certificates section, as below.
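
A minimal sketch of the two additions (the attribute layout follows the standard ApplicationManifest schema; treat it as illustrative rather than the post's exact file):

```xml
<Parameters>
  <!-- Default is empty; the real thumbprint comes from Local.1Node.xml /
       Local.5Node.xml, or from the CD system -->
  <Parameter Name="CertThumbprint" DefaultValue="" />
</Parameters>
...
<Certificates>
  <EndpointCertificate Name="HttpsCert" X509FindValue="[CertThumbprint]" />
</Certificates>
```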

3. Update the Policies for the Https endpoint

In the same ApplicationManifest, add the EndpointBindingPolicy config below into the ReactJs ServiceManifestImport section, where CertificateRef references the EndpointCertificate name added above and EndpointRef is a new HTTPS service endpoint that will be added later.
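
A sketch of that policy (EndpointHttps and the service manifest name ReactJsPkg are my placeholder names, not values from the post):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ReactJsPkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <EndpointBindingPolicy CertificateRef="HttpsCert" EndpointRef="EndpointHttps" />
  </Policies>
</ServiceManifestImport>
```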

4. Update the ServiceManifest.xml of the ReactJs app.

Open the ServiceManifest.xml of ReactJs and update the endpoint as below.
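
A sketch of the endpoint (EndpointHttps is the same placeholder name used for the ApplicationManifest policy above):

```xml
<Resources>
  <Endpoints>
    <!-- Protocol https; the certificate itself is bound via the
         EndpointBindingPolicy in the ApplicationManifest -->
    <Endpoint Name="EndpointHttps" Protocol="https" Type="Input" />
  </Endpoints>
</Resources>
```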

The Endpoint name and the CertificateRef must be the same as the EndpointRef and the EndpointCertificate name added to the ApplicationManifest above.

5. Update the CommunicationListener

  • Using HttpSysCommunicationListener

If you are using HttpSysCommunicationListener, then use the setup below for the endpoints.
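
A hedged sketch of such a listener (the shape follows the standard ASP.NET Core stateless service template; EndpointHttps is the placeholder endpoint name used earlier, and with http.sys the certificate binding comes from the manifest, so no code-level certificate handling is needed):

```csharp
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener(serviceContext =>
            new HttpSysCommunicationListener(serviceContext, "EndpointHttps", (url, listener) =>
                new WebHostBuilder()
                    .UseHttpSys()
                    .ConfigureServices(services => services.AddSingleton(serviceContext))
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    .UseStartup<Startup>()
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    .UseUrls(url)
                    .Build()))
    };
}
```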

Perfect. Now run the application, and we have a ReactJs app running on HTTPS.

  • Using KestrelCommunicationListener

The configuration above does not work here, as KestrelCommunicationListener is not able to load the certificate from the configuration directly. Instead, we need to add some additional code to load and bind the certificate manually, as below.
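
A hedged sketch of the Kestrel listener with a manual HTTPS binding (GetCertificateFromStore() is described next; the endpoint name is the placeholder used earlier, and the exact builder chain is illustrative):

```csharp
new ServiceInstanceListener(serviceContext =>
    new KestrelCommunicationListener(serviceContext, "EndpointHttps", (url, listener) =>
    {
        var port = serviceContext.CodePackageActivationContext
            .GetEndpoint("EndpointHttps").Port;

        return new WebHostBuilder()
            .UseKestrel(options =>
                // Bind the certificate manually; Kestrel ignores the
                // EndpointBindingPolicy of the manifest.
                options.Listen(IPAddress.IPv6Any, port,
                    listenOptions => listenOptions.UseHttps(GetCertificateFromStore())))
            .ConfigureServices(services => services.AddSingleton(serviceContext))
            .UseStartup<Startup>()
            .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
            .UseUrls(url)
            .Build();
    }))
```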

The GetCertificateFromStore() method

Looking into GetCertificateFromStore, you will see that I'm loading the certificate from the Config folder instead of from the Certificate Store. This means you can ship the certificate along with your application and bind it directly to the listener, instead of importing the certificate into every server of the Service Fabric cluster.
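
A sketch of that method (the .pfx file name and password are placeholders; the certificate is shipped inside the service's Config package):

```csharp
private X509Certificate2 GetCertificateFromStore()
{
    // Load the certificate from the Config package deployed with the
    // service, instead of from the machine's certificate store.
    var configFolder = Context.CodePackageActivationContext
        .GetConfigurationPackageObject("Config").Path;

    return new X509Certificate2(
        Path.Combine(configFolder, "localhost.pfx"), "MySecret123");
}
```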

III. Working with Reverse Proxy.

However, the application is not accessible via the Reverse Proxy, because my local Service Fabric is running in unsecured mode and its Reverse Proxy supports only the HTTP protocol.

In order to access the application via the Reverse Proxy, we need to run Service Fabric in secure mode.

1. Run the local Service Fabric in secure mode

On localhost, run the PowerShell script below to convert Service Fabric to secure mode.
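
The SDK's dev-cluster setup script can do this; a sketch (the SDK install path may differ on your machine):

```powershell
# Remove the existing unsecured dev cluster, then recreate it in secure mode
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\CleanCluster.ps1"
& "$env:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" `
    -AsSecureCluster -CreateOneNodeCluster
```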

After the installation is done, we will have a secured Service Fabric cluster, and the application is accessible via the Reverse Proxy.

2. Run PRD Service Fabric in secure mode

If your PRD Service Fabric is unsecured, then follow the steps here to secure it.

Thanks for reading; you can download the source code here.

Working with Service Fabric Reverse Proxy


The Reverse Proxy

Working with Service Fabric, you might have heard about the Reverse Proxy, a built-in feature of Azure Service Fabric that helps microservices running in a Service Fabric cluster discover and communicate with other services that have HTTP endpoints.

The benefit of the Reverse Proxy is that it provides a standard uniform resource identifier format to identify the services running on a Service Fabric cluster, so you can access your services via the Reverse Proxy regardless of the actual ports of the services.

Refer here to learn more about the Reverse Proxy.

By default, after installing Service Fabric, the Reverse Proxy runs on port 19081, and you can access the applications or services using the format below.
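
The format is:

```
http://localhost:19081/<ApplicationName>/<ServiceName>/<path-to-resource>
```

For a partitioned stateful service, the partition is appended as query parameters, e.g. `?PartitionKey=0&PartitionKind=Int64Range`.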

Supported Platforms

The Reverse Proxy in Service Fabric currently supports the following platforms:

  • Windows Cluster: Windows 8 and later or Windows Server 2012 and later.
  • Linux Cluster: Reverse Proxy is not currently available for Linux clusters.

So you should use Windows 8 or 10 for development in order to test the Reverse Proxy.

The Service Fabric Application

When creating a Service Fabric application in Visual Studio, you will see that it allows hosting various services inside.

There are two kinds of services:

  • The internal services that work as backend services and can be accessed by the other services within the application (using Microsoft.ServiceFabric.Services.Remoting.IService).
  • The HTTP services that allow users to operate on your data, or API services that expose your data to consumers outside of the application.

Check out here for more information about Service Fabric applications.

In this demo, I created a Service Fabric application named MvcReservedProxy and added a ReactJs MVC app. When running the application, I have a ReactJs app running on port 8383.

Port 8383 is a random value assigned by Visual Studio when creating the service.

And here is the endpoint configuration in the service manifest.
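
It looks roughly like this (the endpoint name is my placeholder; the port is the one Visual Studio generated):

```xml
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" Protocol="http" Type="Input" Port="8383" />
  </Endpoints>
</Resources>
```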

The application works perfectly fine with this port. However, if you have many apps running in PRD and each application has some HTTP services inside, resolving port conflicts is a challenge, as we need to ensure that the occupied ports do not conflict with existing ones in the PRD environment.

Fortunately, Service Fabric doesn't require the ports to be specified. Instead, it will pick a free port when starting your services. After removing the port from the endpoint and rerunning the application, I have a new port: http://localhost:30001.

Amazing. From now on, I don't need to care about the ports of the HTTP services anymore.

But whenever my application gets restarted or re-deployed, it will get a new port. So how can I access my application? Luckily, as mentioned above, the Reverse Proxy uses a specific uniform resource identifier format to identify the service, so I can access my application via the URL below:
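
Following the Reverse Proxy format with this demo's application and service names, the URL is:

```
http://localhost:19081/MvcReservedProxy/ReactJs
```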

However, the website does not work correctly, as some resource files cannot be reached when accessing it via the Reverse Proxy.

To fix this issue, we need to add some additional code to the ReactJs project, as below.

I. Update ServiceInstanceListener

Open ReactJs.cs and change the ServiceFabricIntegrationOptions from None to UseReverseProxyIntegration.
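
A hedged sketch of the changed line in context (the listener shape follows the standard ASP.NET Core stateless service template, not necessarily the post's exact file):

```csharp
new ServiceInstanceListener(serviceContext =>
    new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
        new WebHostBuilder()
            .UseKestrel()
            .ConfigureServices(services => services.AddSingleton(serviceContext))
            .UseContentRoot(Directory.GetCurrentDirectory())
            // Changed from ServiceFabricIntegrationOptions.None:
            .UseServiceFabricIntegration(listener,
                ServiceFabricIntegrationOptions.UseReverseProxyIntegration)
            .UseStartup<Startup>()
            .UseUrls(url)
            .Build()))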

II. Update Mvc Startup configuration

Open Startup.cs and add the code below into the Configure method.

The ServiceNameUrl format is [YourApplicationName]/[YourServiceName].
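
One common approach (a sketch, not necessarily the author's exact code) is registering the path base so MVC resolves routes and generates links under the proxy prefix:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // ServiceNameUrl = "[YourApplicationName]/[YourServiceName]",
    // e.g. "MvcReservedProxy/ReactJs" in this demo.
    app.UsePathBase("/MvcReservedProxy/ReactJs");

    app.UseStaticFiles();
    app.UseMvc(routes =>
        routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}"));
}
```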

III. Update the Base Url for JavaScript applications

After the second step, the MVC application will work fine. However, if you are hosting JavaScript applications (e.g. ReactJs or AngularJs), then you need to add the base tag to your _Layout.cshtml, because the ReactJs router will use it for routing.
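
For example (using this demo's application and service names):

```html
<!-- _Layout.cshtml: base URL used by the client-side router -->
<base href="/MvcReservedProxy/ReactJs/" />
```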


Now, rerun the application and see the result.

Cheers, and the source code is here.

Writing Unit Tests for the SMTP Component


I was working on a notification system that allows users to subscribe to email notifications based on the status of a database record. An email is sent to the destination whenever the status condition is satisfied.

The system provides a variety of email templates for users to choose from when defining their subscriptions, so the challenge for the development team was writing unit tests for each template and ensuring that the code coverage is more than 90%, according to our DevOps strategy.

In this post, I would like to share my experience writing unit tests for SMTP, and how to perform system integration testing (SIT) without an Exchange email server.

I. Writing Unit Tests For SMTP

In the email templates, I allow users to add some predefined parameters into the email subject and body, as in the sample below:

  • [DateTime]: before delivering the email to the destinations, the system transforms this token into the current date and time.
  • [RecordId]: the system picks up the primary key value of the record and substitutes it for this token.
  • [CreatedBy] or [UpdatedBy]: if these tokens are in the email subject, they are transformed into the account email from Active Directory; if they are in the email body, they are transformed into the account name instead.

The requirement for the unit tests of this system is to verify not only the template transformation but also the SMTP communication, in order to limit the number of bugs leaking into SIT and UAT.

I found a useful open-source library named netDumbster that simulates an SMTP server, letting developers write unit tests for the SMTP module. It allows running the unit tests on any build server without any additional installation.

Below is the source code for a sample notification service.

And here is the unit test for this service.
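
As a hedged sketch of such a test (the NotificationService class and its constructor are hypothetical stand-ins for this post's service; netDumbster's SimpleSmtpServer is used as documented):

```csharp
[TestClass]
public class NotificationServiceTests
{
    private static SimpleSmtpServer _smtpServer;

    // Start the emulator once per test assembly (see the note below about
    // [AssemblyInitialize] vs [TestInitialize]).
    [AssemblyInitialize]
    public static void StartSmtp(TestContext context)
        => _smtpServer = SimpleSmtpServer.Start(25);

    [AssemblyCleanup]
    public static void StopSmtp() => _smtpServer.Stop();

    [TestMethod]
    public void Send_Should_Transform_DateTime_Token()
    {
        // Hypothetical service under test, pointed at the emulator
        var service = new NotificationService("localhost", 25);
        service.Send("user@test.local",
            "Status changed [DateTime]", "Record [RecordId] updated.");

        // The emulator captured the message; the token must be resolved
        Assert.AreEqual(1, _smtpServer.ReceivedEmailCount);
        var mail = _smtpServer.ReceivedEmail[0];
        Assert.IsFalse(mail.Headers["Subject"].Contains("[DateTime]"));
    }
}
```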

The configuration of the Unit Test project.

You may face the issue below when using this library; it happens because the SMTP server did not stop properly and the unit test process is still running.

To resolve this issue, simply force-stop the processes below in the Task Manager and then run the unit tests again.

Here is the result when running the automated build on VSTS.

I would recommend using [AssemblyInitialize] and [AssemblyCleanup] to start and stop the SMTP emulator, instead of [TestInitialize] and [TestCleanup], because starting and stopping the emulator takes time, and sometimes the SMTP server does not stop properly and keeps holding port 25, so the next instance cannot initialize for the second test method.


II. The SMTP Emulator for SIT and UAT.

There are many reasons why we don't want testers using the real production Exchange server to perform SIT or UAT testing; I won't list them all here. However, if you are looking for an alternative that allows testers to test SMTP without an Exchange server, there is a free and open-source tool for that purpose.

Papercut is an SMTP emulator that saves the incoming emails to a location instead of delivering them to the real destinations. It provides a WinForms application that allows testers to verify the emails directly.

There is also a Papercut service that lets you host the SMTP emulator on a server in the test environment and share it across all applications.

III. Source Code

You can download the demo source code here.

Make Your Laptop a Build Agent of VSTS


You are a developer working on some open-source projects, and you use VSTS to maintain your source code. As you know, the free tier of VSTS gives you 240 build minutes every month. What happens when your free minutes are gone? Do you need to wait until next month to build and deploy your projects?

Or you are using the preview version of Visual Studio (currently 2017 Preview 3) to develop some projects on .Net Core 2 and .Net Standard 2 Preview 2; however, there is no build agent available on VSTS for the preview frameworks yet.

I had a crazy idea: why don't we use our own laptops as build agents for VSTS, so that you can install whatever you want on your laptop and worry less about host compatibility? If you are facing the difficulties above, then in this post I will show you how to make your laptop become a build agent of VSTS.

1. Download Agent Package

Log in to your VSTS, navigate to Agent Queues, and then click Download agent. There are three packages available, for Windows, OS X, and Linux. You should be able to download a zip package of around 83 MB for Windows; it contains all the components needed for a build agent host.


2. Generate Personal access tokens

Click on your account icon at the top right and navigate to Security, then Personal access tokens. Create a new token named build-agent with the Authorized Scopes as below.

After clicking the Save button, you will have a token that looks like this:


Refer here for details on creating and revoking Personal access tokens.

3. Agent Installation

  • Right-click on the downloaded zip package -> Properties, check the Unblock checkbox, then click OK.
  • Unzip the package into C:\vsts-agent.
  • Create a folder C:\vsts-build for the builds; it will be used to contain the source code from VSTS when executing a build definition.
  • Run PowerShell with administrator privileges and change into the unzipped package folder (using the cd command).
  • Run ./config.cmd to start the installation:
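
The prompts look roughly like this (the values are examples of mine; the PAT is the token generated in step 2):

```powershell
PS C:\vsts-agent> .\config.cmd

>> Enter server URL: https://{your-account}.visualstudio.com
>> Enter authentication type (press enter for PAT): PAT
>> Enter personal access token: ************
>> Enter agent pool (press enter for default):
>> Enter agent name: MY-LAPTOP
>> Enter work folder: C:\vsts-build
>> Enter run agent as service? (Y/N): Y
>> Enter User account to use for the service: NT AUTHORITY\NETWORK SERVICE
```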

  • When running the agent as a service, it will ask for a service account; I'm using Network Service as the service account for the agent. The reason to run the agent as a service is that whenever you start your laptop/PC, the agent becomes available in VSTS immediately without any further configuration.

Run the command .\config.cmd remove to uninstall the agent from your laptop if needed.

I like the way Microsoft developed the agent: your laptop doesn't need a static IP, and the agent will work whenever the laptop is up and connected to the internet, regardless of its IP and location. Bring your laptop everywhere you want and execute the build anytime you need.

4. Agent Configuration

Now navigate to the default agent pool on your VSTS; your laptop name should appear there. Before executing a build, you need to define the capabilities of the new agent as below. The Java capabilities in the screenshot below are needed if you are using Java build tasks, for example the Sonarqube tasks.


Here is the full list of my laptop's capabilities for your reference.

  • java: C:\Program Files\Java\jdk1.8.0_131
  • jdk: C:\Program Files\Java\jdk1.8.0_131
  • jdk_8: C:\Program Files\Java\jdk1.8.0_131
  • MSBuild: C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin
  • VisualStudio: C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise
  • VSTest: C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TestWindow

Now you can execute your builds without the limitation. However, please note that this is for personal use only. I don't recommend using this at your company, as exposing a server to the internet is non-compliant and dangerous.

If your company is using VSTS, then it should purchase agents from Microsoft; hosting everything (TFS and agents) on premises to maximize the flexibility is also an option.

This is a screenshot to prove that my laptop is working fine with VSTS.

HBD.Mef.Mvc Enhancement For WebApi


In a previous post, I introduced a Workspace for Web MVC, and along with that post the AzureNotes module was provided as an example.

Now, in this post, I would like to introduce the new version 1.0.5 of HBD.Mef.Mvc. From this version, it supports WebApi, allowing you to develop a similar Workspace for Web API.

The HBD.Api.Shell

Similar to HBD.Mvc.Shell, HBD.Api.Shell is a Workspace that allows developing a WebApi module in a different project and deploying it to the Workspace later.

Developing a WebApi module is simpler than developing a module for Web MVC, because WebApi doesn't have any resources (views, CSS files, and JavaScript files) other than controllers. Hence, deploying a module into the Shell is just a matter of copying the binaries and config files.

To simplify life, I have provided a general route for all modules, including the Workspace, as below.
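
A sketch of such a catch-all Web API route (the route template matches the path format described next; the exact registration is illustrative, not the library's source):

```csharp
public static void Register(HttpConfiguration config)
{
    config.Routes.MapHttpRoute(
        name: "ModuleApi",
        routeTemplate: "api/{area}/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });
}
```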

This route applies to all modules and all controllers. Hence, the path to access a controller is api/{area}/{controller}/{id}, where {area} is the module's name, {controller} is the controller name, and {id} is optional.

The module registration is also simplified: instead of providing the module name and custom routing, an API module requires the name only. Below is sample code to register an API module.

HBD.Mef.Mvc will use the registered name to select the appropriate controller for each request. If no controller is found for a request, an HttpResponseException with a 404 status code will be thrown.

Mef Controller Resolver Service

Please note that this registration is for API modules only. Nothing has changed in the MVC module registration that was introduced in the previous post.

The AzureNoteApi

I would like to show you a sample module for HBD.Api.Shell called AzureNoteApi; this module simply exposes all the actions of AzureNote from the previous post as a WebApi. Below is the detailed information about the APIs. You can use Postman to try them out against my live demo.

Live Demo

HBD.Api.Shell has also been published to Azure; you can give it a try by using Postman with the following APIs.

  • HBD.Api.Shell API

  • AzureNote API

Source Code

The latest source code has been uploaded to GitHub; you can download it here.

Nuget Packager Unsupported Framework Issue on VSTS


I was facing an Unsupported framework issue with the NuGet Packager on Visual Studio Team Services (VSTS) when porting my HBD Framework to .Net Standard.

This issue drove me mad for a couple of days, but eventually I found a solution. Let me explain the problem and the solution, as it might help those who are facing the same problem.

1. Build and Deployment Process.

First, I would like to share my NuGet publishing automation process. I have been using VSTS as my primary source code repository for the last few years. There, I created a few automated build and release processes that help publish my packages to Nuget.org continuously.

Internally, I'm using Package Management for VSTS as my test environment, so all the beta versions of my packages are published to that feed for testing purposes. Once a package is stable, I simply trigger the other release to push it to Nuget.org.

Below is an illustration of the publishing process implemented on my VSTS.

I will show the details of the continuous deployment in a different post, as the purpose of this post is fixing the NuGet Packager issue.

Continuous Deployment

2. The Issue.

When I ported my HBD framework to .Net Standard, the NuGet Packager showed the warning message below when packaging my binaries on VSTS.

After that, .Net Standard was displayed as an Unsupported framework on my package. Hence, my package could not be published to Nuget.org, as an Unsupported framework is not allowed to be deployed there.

However, if I open the package with NuGet Package Explorer, the frameworks display properly.

Here is the configuration of my nuspec file.

3. The solution

After spending a lot of time on this issue, I realized that the problem is the version of the default Nuget.exe in the NuGet Packager: it is old (3.x) and does not support .Net Standard.

So, to fix this issue, I downloaded the latest version of Nuget.exe here and replaced the default version of the NuGet Packager through the following configuration of the build definition on VSTS.

  1. Put the Nuget.exe file in a shared location on VSTS.
  2. Map the shared location to the build definition.
  3. Point the NuGet Packager to the Nuget.exe in the shared location.


Then execute a new build and re-deploy the package to Nuget.org again; here is the result.

  • Hopefully, this issue of the NuGet Packager will be fixed soon. However, at this time, if you are facing the same problem, just apply this workaround to your build.
  • VSTS online is a free tool. Register here if you want to learn about automated builds and deployments.

You can download the latest source code of HBD.Framework here.

Start and Stop Azure VMs Using Microsoft Flow


You have a few VMs on Azure that are not required to be accessible 24/7; however, you regularly forget to turn them off after use. As you know, keeping VMs running idle on Azure is wasteful. So, are you looking for a tool that helps you turn off your VMs without logging in to the Azure portal? Then this post is for you. You might know about Microsoft Flow, an application that allows automating manual actions. Making use of Flow, I would like to show you how to create buttons in Flow to start and stop your Azure VMs.

1. Set up the Automation Account and Runbook

A Runbook is a helpful feature of the Automation Account that allows automating manual tasks. Making use of this, the following steps will show you how to create Runbooks and integrate them with Flow. You can find more information about Runbooks here.

2. Implement Runbooks

To run a Runbook, an Automation account needs to be established first. Here, I will create a new Automation account named HbdAutoAccount.

Navigate to the Runbook Gallery, then look up and import the two Runbooks Start Azure V2 VMs and Stop Azure V2 VMs, under the names Start-Steven-PC and Stop-Steven-PC.

Now I have two Runbooks in the Automation account, as in the screenshot below.

3. Create Webhooks

Go into each Runbook and click the Edit and Publish buttons before using it, because Azure does not allow running unpublished Runbooks. After any change to a Runbook, remember to publish it again for the change to take effect.

Next, go to the Webhooks section of the Runbook and create a hook with the following information.

  • Name: Start Steven PC
  • Enabled: Yes
  • Expires: 1 year from today.
  • URL:
  • Parameters:
    • Resource Group Name: VMs, the resource group of the VM.
    • Name: Steven-PC, the VM name.

Create the same hook for Stop-Steven-PC. After the two webhooks have been set up, I have two URLs as below. These URLs will be used to create two buttons in Microsoft Flow.

  • Start Steven PC:
  • Stop Steven PC:

4. Create the start and stop buttons on Flow.

1. Create Start and Stop buttons on Microsoft Flow.

After logging in to Microsoft Flow (if you don't have a Flow account, just create one; it's free), click on My Flows and Create from blank, then select Flow button for mobile with the HTTP action:

  • Flow name: Start Steven-PC
  • Http method: POST
  • URL: the Start Steven PC URL that was established previously.

Then click the Create flow button. After this step, I will have a mobile button in Flow to start the VM. Follow the same steps for the stop button.

Download Microsoft Flow for iPhone, and now I can start and stop my VM on Azure directly from my phone.

2. Enable the auto shutdown of Azure VM.

Moreover, Azure VMs have another feature, called Auto-shutdown, which helps shut down the VM at a certain time. Here, the shutdown time of my VM is 1:00 am, so in case I forget, it will shut down my VM at 1:00 am every day automatically.

  • With Flow buttons, you are able to start and stop a set of VMs at the same time by adding the corresponding HTTP actions.
  • As I have opened two webhooks for the Flow buttons, there is a security gap here, so you should not disclose the hook URLs to anyone, even your wife.
  • The expiry date of the hooks should be kept as short as manageable, and when a hook expires, you should create a new one and update the URL in the Flow buttons.
  • Recently, the Azure mobile app was released, which allows you to manage Azure resources on the go. However, that tool requires an Azure subscription. In case you want to let someone without a subscription control the VMs, Flow is still useful.

The Vietnamese version is here.

Why Workspace? Why Mef? What HBD?


Recently, I shared a few libraries that allow developing a Workspace, and many people have asked me: what is a Workspace, and why should we care about it?

So I came up with this post to share a few advantages of the Workspace. I'm not talking about disadvantages here, because every framework has its own pros and cons; we chose it because it was suitable for our company, our departments, and our teams, or in other words, our customers chose it.

The Advantages Of the Workspace

  1. When developing a new application, we need to build the framework, define the project structure, and design the theme and layout in the IDE; besides that, we need to implement the services that will be used in the application, such as authentication, workflow, and input and output services. What happens when many applications are being developed by different teams in your department? Eventually, each team comes up with an application that has a different theme, layout, and components. Consider the Workspace a framework: it helps manage the layout, theme, and services consistently across the development teams. It also helps remove duplicated effort when two or three teams would otherwise implement similar components.
  2. Another case: in your department there are a hundred websites in operation, and now your company has just released a new logo and requested that the logo be replaced in all applications. How many teams need to be involved, and how long will it take? As you know, re-branding is not only a logo; the theme, color table, and fonts of the applications change as well. However, if you have a Workspace, you just need to enhance and re-deploy it, without impact to the modules.
  3. Finally, if your department already has a Workspace, then when any team wants to build a new application, they can use that Workspace as a framework and start developing their modules right away, instead of spending time on solution structure, framework definition, and theme and layout creation for the application.

The other advantages were already shared in the previous post; I would like to bring them here so that we have a general view of them.

  1. Let's imagine that you have many development teams and are trying hard to make the teams work independently and in parallel. However, three or four teams are working on significant changes to a complex application, and that application does not support modularization. So whenever any team needs to deploy their changes to the Staging or Production environments, they need to inform all the other teams to ensure there is no conflict between the teams. After the deployment, the other teams need to merge the changes into their source code branches. Managing this situation is a nightmare for the project manager.
  2. Another scenario: if the application doesn't support modularization, then for any change, even a small one, you need to conduct a System Integration Test (SIT) and a User Acceptance Test (UAT) for the whole system because of the impact, and the effort will be charged back to the business. The business users may wonder why such a simple change is so costly.
  3. Micro-services adoption: as you know, the micro-service style structures an application as a collection of loosely coupled services that implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.

Why Mef?

I've worked with many Dependency Injection containers, such as Ninject, Autofac, Unity, QuickInject, and StructureMap.

Their functionality is essentially the same, because they are all implemented based on the IoC and Dependency Injection concepts. Similarly, Mef has been available since .Net Framework 4.0.

The only real difference between the libraries is the way type mappings are imported and objects are exported from the DI container.

If you ask me why I like Mef, the answer is love: a crazy love without reason. Though I think the accepted answer is that she is Microsoft's daughter.

I hope the answer satisfies you.

What is HBD?

HBD (Hoang Bao Duy) is just a lazy, foolish developer, and all his projects were intended for personal use only.

All the HBD projects were developed by himself, so obviously there is a lot of silly code in them. If you find some, please help to point it out; that is how you help him grow.

He does not recommend using his libraries in your company projects, because an enterprise project framework should be verified carefully by professional development and testing teams.

That doesn't mean the libraries come with poor quality. To share a bit more about the HBD libraries: all of them have been tested carefully before publishing; besides that, vulnerability and code quality scanning are applied using Sonarqube.com and Whitesource-Bolt. Unit tests and coded UI tests are also part of his projects, in order to ensure that all the functionality of the libraries works as expected. Furthermore, the author has been trying his best to improve the libraries, add new features, and fix defects promptly.

However, if you want to implement something but don't know how to start, or you want some reference code, then the HBD open-source projects are for you.

Source Code

All the HBD projects have been published here.

The Mef for MVC and Azure DocumentDb Demonstration 


As I have already shared the Mef libraries for WPF, WinForms, and Console applications, in this post I would like to share one more Mef library, for .Net MVC.

As you may know, MVC supports Areas, which allow developing a set of views, controllers, and resources (images, CSS, and JavaScript) inside the Areas folder. However, the areas are still part of the MVC application: the resources need to be imported into the _Layout view manually and deployed along with the application. Besides that, any change in an area may impact the whole application and needs to be tested carefully before going live.

I. HBD.Mef.Mvc Introduction

PM> Install-Package HBD.Mef.Mvc

The Definition.

  • Workspace: this is an MVC website (a Shell) acting as a module container that allows deploying and running multiple modules separately.
  • Module: this is a loosely coupled MVC area that is implemented independently of the Workspace and can be deployed into the Workspace later.

So, making use of Areas in MVC, I would like to introduce my HBD.Mef.Mvc library. It is implemented based on the Mef technology and allows you to build and manage MVC areas as modules, not only in separate folders but also in different projects.

Before going into the details of the library implementation, I would like to discuss a few questions below.

Why do we need application modularization?

  1. Imagine you have many development teams and try to make them work as independently and separately as possible. If three or four teams are working on significant changes to a complex application that does not support modularization, then whenever one team needs to deploy its changes to the Staging or Production environments, it has to inform all the other teams to ensure there is no conflict between them. After the deployment, the other teams need to merge the changes into their own source code branches. Managing this situation is a nightmare for the project manager.
  2. In another scenario, if the application doesn’t support modularization, then for any change, even a small one, you need to conduct a System Integration Test (SIT) and User Acceptance Test (UAT) for the whole system because of the potential impact, and the effort will be charged back to the business. The business users may be surprised at why such a simple change is so costly.
  3. Micro-services adoption: the microservice style structures an application as a collection of loosely coupled services that implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.
Micro Service Architecture

So the advantages of a modularized application are:

  • Teams can work on the modules independently and in parallel.
  • It speeds up and enables a continuous development and delivery process. In terms of the Agile development process, a sprint is normally two weeks, and the recommendation to the development team is to release a small workable feature of the application and demo it to the business users at the sprint demonstration meeting.
  • It helps to scale down the impact of changes and increases the scalability, flexibility, and stability of the application.
  • Developing and maintaining automation tests or UI tests for a single module is much easier than for the whole complex application.

What is an Mvc Module?

As you know, a module is not just Views and Controllers but also settings, configuration, resources, and business and data logic. In the next sections, I will show you how to build a Workspace and create a module for that Workspace separately by using HBD.Mef.Mvc.

II. HBD.Mef.Mvc Features

1. Navigations

A module usually comes with a set of navigation links that we need to add to the top menu of the Workspace so that users can navigate to the module’s functions. If, while developing a module, the navigation links are added directly to the _Layout view of the Workspace, the _Layout file has to be overwritten in every deployment. The dependency problem then happens again whenever more than one development team works on the same Workspace, because each team would also maintain a different version of the _Layout view in its module.

  • INavigationService

So this library provides an INavigationService that allows registering navigation links into the main menu of the Workspace dynamically.

Currently, the main menu supports two levels only. You can add a navigation link directly to the main menu, or add a menu whose children are navigation links.
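As an illustration only, a two-level registration might look like the sketch below. The INavigationService interface is part of HBD.Mef.Mvc, but every fluent method name shown here (Add, WithTitle, AddChild) is an assumption based on the terms used in this post, not the library’s documented API:

```csharp
// Hypothetical sketch: registering both supported menu levels.
// Method names are assumptions; only INavigationService is from the post.
public void RegisterNavigation(INavigationService navigationService)
{
    // A navigation link added directly to the main menu (level 1).
    navigationService.Add("Dashboard").WithTitle("Dashboard");

    // A parent menu whose children are navigation links (level 2).
    var reports = navigationService.Add("Reports").WithTitle("Reports");
    reports.AddChild("Monthly").WithTitle("Monthly Report");
}
```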

  • IFooterNavigationService

Similarly, the IFooterNavigationService allows registering links into the footer section of the Workspace dynamically.

  • Navigation and Authorization

In MVC, some controller actions require particular Roles for execution, so if the current user doesn’t have the necessary Roles, she is not able to execute those actions. In this case, the Workspace automatically hides all the navigation links related to those actions. If a menu has no visible children, the Workspace hides the parent menu as well. This helps to save main-menu space for the other modules.

2. Resource Bundle Handling.

Working with MVC, you will know that the framework provides a feature called bundling that allows importing CSS and JavaScript files into the views.

Similar to navigation, including all resources in the _Layout view of the Workspace is not recommended, as they may conflict with the resources of the other areas.

To resolve this issue, I have implemented a helper class in the library that registers the module resources into bundles and renders those bundles only when the module’s views are accessed. In other words, the Workspace injects the resources of the currently accessed module into the _Layout page at runtime. This ensures that there are no CSS or JavaScript conflicts between the modules.
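As a sketch of the idea, the module can scope its bundle virtual paths to its own Area folder. ScriptBundle and StyleBundle below are standard System.Web.Optimization types; the class name, file names, and the exact point where HBD.Mef.Mvc picks these bundles up are assumptions:

```csharp
// Hypothetical per-module bundle registration. Scoping the bundle paths
// to the module's Area folder prevents clashes with other modules.
using System.Web.Optimization;

public static class AzureNotesBundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // JavaScript files shipped inside the module's Area folder.
        bundles.Add(new ScriptBundle("~/bundles/AzureNotes/scripts")
            .Include("~/Areas/AzureNotes/Scripts/notes.js"));

        // CSS files shipped inside the module's Area folder.
        bundles.Add(new StyleBundle("~/bundles/AzureNotes/content")
            .Include("~/Areas/AzureNotes/Content/notes.css"));
    }
}
```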

3. Controllers scanning.

You may be aware that if we want to export a class into the Mef container, we need to mark the class with the Export attribute. What happens if you forget to add this attribute to your controllers? No worries, this case is covered by the library: it scans all controllers in your binaries and exports them into the Mef container automatically.

4. Configuration management.

As you know, when a .Net website runs, the AppSettings and ConnectionStrings are loaded into the ConfigurationManager class of System.Configuration.

Each module will definitely have its own set of app settings and connection strings. Instead of adding the configuration to the Workspace config file, you can keep it in a Web.config placed at the top level of your module. The library will load it and merge it into the ConfigurationManager automatically.

Currently, only the AppSettings and ConnectionStrings sections are supported. As the configuration is merged into the Workspace configuration, I recommend using the module name as the prefix of the configuration keys.
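For illustration, a hypothetical module Web.config following that prefix recommendation might look like this (the key names and values are made up):

```xml
<!-- Hypothetical module-level Web.config. Only appSettings and
     connectionStrings are merged into the Workspace configuration;
     note the "AzureNotes." prefix to avoid key clashes. -->
<configuration>
  <appSettings>
    <add key="AzureNotes.PageSize" value="20" />
  </appSettings>
  <connectionStrings>
    <add name="AzureNotes.Db" connectionString="..." />
  </connectionStrings>
</configuration>
```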

Sample code showing how to use all the features above is provided in the module development section below.

III. Develop a new Workspace.

The idea is to implement an MVC Workspace as a core foundation application that allows adding in modules and services independently.

I have developed a Workspace named HBD.Mvc.Shell using HBD.Mef.Mvc and published it to Github; if you want to develop a new Workspace by yourself, you can reference my source code here.


However, I would like to highlight a few things as below.

1. Bootstrapper class in the App_Start folder.

After installing HBD.Mef.Mvc from Nuget, a new Bootstrapper class will be added to the App_Start folder automatically.

The only thing you need to do is override the RegisterMainNavigation method and add the Workspace’s navigation there, because the menu bar and footer bar are rendered from INavigationService and IFooterNavigationService instead of being maintained manually.

The rest of the configuration, module loading, and resource management is done automatically by the Bootstrapper itself. However, all methods of the Bootstrapper are virtual, so in case you want to customize the logic, you can override them easily.

Please note that DisplayAt and AlignAtRight are only supported at the root menu level, for both the main menu and the footer navigation.

FontAwesome and Glyphicon icons are also supported on both menu levels. If you want to display an image in the menus instead, you can replace the Glyphicon class with a virtual path to the image location. When rendering, the Workspace will look for the image in both the Area and global Workspace folders and display it properly.
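A hedged sketch of such an override is shown below. Bootstrapper, RegisterMainNavigation, DisplayAt, and AlignAtRight are named in this post; the base class name, the method signature, and the other fluent calls (Add, WithTitle, WithIcon) are assumptions:

```csharp
// Hypothetical sketch of the App_Start Bootstrapper override. Base class
// name and fluent method signatures are assumptions based on this post.
public class Bootstrapper : MefBootstrapper
{
    protected override void RegisterMainNavigation(INavigationService navigationService)
    {
        // Root-level item: DisplayAt/AlignAtRight apply at this level only.
        navigationService.Add("About")
            .WithTitle("About")
            .WithIcon("glyphicon glyphicon-info-sign") // or a FontAwesome class,
                                                       // or a virtual image path
            .DisplayAt(99)
            .AlignAtRight();
    }
}
```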

2. Main menu and Footer partial views.

In the HBD.Mvc.Shell Workspace, I moved the main menu and footer out of the _Layout view into partial views. So, if there is any future change to the menu rendering, we just need to re-deploy the small partial view files instead of the whole _Layout file. This helps to minimize the impact of changes on the Workspace.


Navigation Partial Views

3. Navigation Authorization.

As mentioned above, the navigation links support authorization. In this separate section, I would like to show you how a navigation link can pick up the Roles from a controller, and how you can specify the Roles manually.

  • Automatically pick up the Roles from the Authorize attribute.

Here is the sample code to add a navigation link for a controller action:
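A hedged sketch of that registration follows. The For extension method and the navigation title come from this post; the other fluent calls (Add, WithTitle) are assumptions:

```csharp
// Hypothetical sketch: a navigation link bound to a controller action.
// The For extension inspects the action for its Authorize Roles.
navigationService.Add("ImportAccount")
    .WithTitle("Import Account From File")
    .For<ImportController>(c => c.ImportFromFile());
```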

With the above code, we add a navigation link titled Import Account From File for the ImportFromFile action of ImportController, and below is the controller code.
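A sketch of what such a controller could look like (the Role name "Accountant" is hypothetical; the Export attribute makes the controller discoverable by the Mef container, and the Authorize Roles are what gets picked up for the navigation link):

```csharp
// Hypothetical controller sketch for the navigation example.
using System.ComponentModel.Composition;
using System.Web.Mvc;

[Export]
public class ImportController : Controller
{
    // "Accountant" is a made-up role for illustration only.
    [Authorize(Roles = "Accountant")]
    public ActionResult ImportFromFile()
    {
        return View();
    }
}
```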

The For extension method checks the Authorize attribute of the ImportFromFile action to see whether any Roles have been specified and picks up those Roles for the navigation link automatically. If no Authorize attribute is found on the action, it checks at the controller level for the same.

Note that this automation only happens when no Roles have been provided. In other words, if you have already provided the Roles for the navigation link, the automatic Role pick-up won’t be executed.

  • Specify the Roles manually.

The code below is a sample of manual Role specification:
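A hedged sketch of the manual variant, reusing the WithAuthorize extension method named in this post (the Role names are hypothetical, and the other fluent calls are assumptions):

```csharp
// Hypothetical sketch: specifying Roles for the navigation manually.
// Because Roles are supplied, the automatic pick-up is skipped.
navigationService.Add("ImportAccount")
    .WithTitle("Import Account From File")
    .For<ImportController>(c => c.ImportFromFile())
    .WithAuthorize("Admin", "Accountant");
```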

WithAuthorize is an extension method that allows you to specify the Roles for a navigation link. Calling WithAuthorize without any parameters means the navigation link only requires the current user to be authenticated.

IV. Quick start Module development.

To prove that my library works, I have implemented a simple module named Azure Notes. This module is a personal notebook and uses Azure DocumentDb to store the note items.

Before starting the module implementation, we need to create the Azure DocumentDb first. There are two options for developers.

1. Setup DocumentDb on Azure Portal

If you already have an Azure subscription, you can log in to the portal, create a DocumentDb account, and then create a collection with the following information.

  • Database name: AzureNotes
  • Collection Id: AzureNotes
  • Partition Key: AzureNotes
  • Storage Capacity: 10GB.
  • Throughput Capacity: 400 RU/s

If you don’t have an Azure subscription, you can install the Azure DocumentDb Emulator here and then create a similar collection as above.

2. Module Implementation

  • Create new Module

Create a new Mvc web application and install the latest version of HBD.Mef.Mvc from Nuget. After installation, you can delete all the redundant files and keep only the folders and files below.

  • Content folder
  • Controllers folder
  • Views folder
  • Scripts folder
  • all configuration files.

Then add a new class named <YourModuleName>AreaRegistration that inherits from MefAreaRegistration in HBD.Mef.Mvc. This class will register your module as an Area in the Workspace application.

Your module project should now contain only the folders and files listed above, plus this new class.

  • Register Module Routing
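A sketch of the routing registration inside the module’s AreaRegistration is shown below, assuming MefAreaRegistration exposes the standard MVC AreaRegistration members (the route values are illustrative, and the base class may already handle part of this):

```csharp
// Hypothetical sketch of the module's AreaRegistration with its route.
using System.Web.Mvc;
using HBD.Mef.Mvc;

public class AzureNotesAreaRegistration : MefAreaRegistration
{
    public override string AreaName => "AzureNotes";

    public override void RegisterArea(AreaRegistrationContext context)
    {
        // Standard MVC area route, scoped under the module name.
        context.MapRoute(
            "AzureNotes_default",
            "AzureNotes/{controller}/{action}/{id}",
            new { controller = "Note", action = "Index", id = UrlParameter.Optional });
    }
}
```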

  • Register Main Menu Navigation

  • Register Footer Navigation
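A combined hedged sketch of the two navigation steps above, using the INavigationService and IFooterNavigationService described earlier (the fluent method names and the NoteController actions are assumptions):

```csharp
// Hypothetical sketch: main-menu and footer links for the Azure Notes module.
navigationService.Add("AzureNotes")
    .WithTitle("Azure Notes")
    .For<NoteController>(c => c.Index());

footerNavigationService.Add("AzureNotesAbout")
    .WithTitle("About Azure Notes")
    .For<NoteController>(c => c.About());
```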

  • Register Module Bundles

  • Update the App Settings of Web.config file
    • Using Azure DocumentDb Emulator
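A hypothetical appSettings fragment for the emulator (the key names are made up; the endpoint and master key below are the emulator’s published defaults):

```xml
<appSettings>
  <add key="AzureNotes.DocumentDb.EndpointUrl" value="https://localhost:8081" />
  <add key="AzureNotes.DocumentDb.AuthKey"
       value="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" />
  <add key="AzureNotes.DocumentDb.DatabaseId" value="AzureNotes" />
  <add key="AzureNotes.DocumentDb.CollectionId" value="AzureNotes" />
</appSettings>
```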

    • Using Azure DocumentDb
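A hypothetical appSettings fragment for a real Azure DocumentDb account (again, the key names are made up; replace the placeholders with your own account values):

```xml
<appSettings>
  <add key="AzureNotes.DocumentDb.EndpointUrl"
       value="https://your-account.documents.azure.com:443/" />
  <add key="AzureNotes.DocumentDb.AuthKey" value="your-primary-key" />
  <add key="AzureNotes.DocumentDb.DatabaseId" value="AzureNotes" />
  <add key="AzureNotes.DocumentDb.CollectionId" value="AzureNotes" />
</appSettings>
```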

The DocumentDb account URL and key can be found under the Keys section of the DocumentDb account.

  • Controller and Views Implementation

You can then continue to implement your controller and view logic as usual. However, remember to add the Export attribute to your controllers.

3. Module Deployment

To deploy your module into the Workspace, all the folders below need to be copied into the Areas\<YourModuleName>\ folder, and all binaries of your module need to be copied to the bin folder of the Workspace.

  • Debug Mode

To run your module on localhost, you can set the command line below as the post-build event of your module. It copies the necessary files and folders into the Workspace on every build.
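A hypothetical post-build event sketch (the Workspace project path HBD.Mvc.Shell and the module name AzureNotes are assumptions; adjust them for your solution):

```bat
REM Copy the module's views, content, and scripts into the Workspace Area.
xcopy /S /Y "$(ProjectDir)Views"   "$(SolutionDir)HBD.Mvc.Shell\Areas\AzureNotes\Views\"
xcopy /S /Y "$(ProjectDir)Content" "$(SolutionDir)HBD.Mvc.Shell\Areas\AzureNotes\Content\"
xcopy /S /Y "$(ProjectDir)Scripts" "$(SolutionDir)HBD.Mvc.Shell\Areas\AzureNotes\Scripts\"

REM Copy the module binaries and its Web.config into the Workspace.
xcopy /Y "$(TargetDir)*.dll" "$(SolutionDir)HBD.Mvc.Shell\bin\"
copy /Y "$(ProjectDir)Web.config" "$(SolutionDir)HBD.Mvc.Shell\Areas\AzureNotes\Web.config"
```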

  • Deploy to Production

Package your module with the structure below for production deployment, so that the IT guys can copy the folders and files quickly, or use an auto-deployment tool to deploy the zip file into the Workspace application.

    • Areas
      • AzureNote
        • Content: all CSS files of your module.
        • Scripts: all javascript files of your module.
        • Views: all *.cshtml files and folders of your modules.
        • Web.config
    • bin
      • AzureInterfaces.dll
      • AzureNote.dll
      • AzureNoteEntities.dll
      • AzureStorage.dll
      • DocumentDB.Spatial.Sql.dll
      • Microsoft.Azure.Documents.Client.dll
      • Microsoft.Azure.Documents.ServiceInterop.dll

After deployment, just recycle the application pool to re-initialize the Workspace; it will load the new module and display it on the screen.

4. Live Demo

I have hosted the HBD.Mvc.Shell and the Azure Notes module on my Azure subscription; you may want to take a look at the link below. I only have $25 of credit on my subscription, so hopefully it is not going down any time soon.

V. Source Code

  1. HBD.Mef.Mvc
  2. HBD.Mvc.Shell
  3. AzureNotes