What are containers, and how to containerize an ASP.NET MVC application using Docker

Somak Das
8 min read · Mar 15, 2018


In this article we are going to discuss containers, how they differ from VMs, and give you a step-by-step guide to creating a demo ASP.NET MVC application and containerizing it with the Docker engine. We will also do a hands-on exercise: creating a Windows VM in Azure and setting up the Docker engine in that VM. We are going to use Windows containers for this demo.

Note: Setting up the Docker engine on an organisation's on-premise systems can be a challenge due to permission and security issues, so we will use an Azure subscription to create a VM if you don't already have one.

Prerequisites:

• Azure subscription

• Windows 10 VM with at least 4 GB RAM

• Knowledge of ASP.NET MVC

• Basic knowledge of containers and virtualization

What are containers?

For those who are new to the terms container and Docker: a container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. Available for both Linux- and Windows-based apps, containerized software will always run the same, regardless of the environment.

Containers isolate software from its surroundings (for example, differences between development and staging environments) and help reduce conflicts between teams running different software on the same infrastructure.

At this point some of us may be thinking about virtual machines and wondering how containers differ from VMs; we will get to that comparison shortly.

And Docker is the world’s leading software container platform.

How did containers originate?

To know more about containers, I think we need to know a bit of history of the IT industry and how it evolved.

We are in a generation where businesses are totally dependent on applications; applications mean business. To support those applications, organizations used to run their own servers (bare-metal hardware). The problem with this setup was that no one ever knew the exact load an application would face, or its exact number of users, so organizations ended up buying servers with 10 or even 100 times more capacity than was actually required at the time.

Why? The answer is very simple: what if they needed to scale in the future? No one was ready to sacrifice business, right?

But in most cases they ended up paying far more than necessary. Moreover, if a new application needed to be hosted, they would first buy and set up another server and then host the application on top of it. So the upfront cost was huge, not to mention the maintenance cost and the time required to provision those servers.

Then came VMware with the concept of virtual machines and hypervisors. Though there were several other virtualization providers, VMware was the best known of them all.

With the onset of hypervisors, organizations could utilize all those unused resources. If, say, they had a single bare-metal server running one application, they could now create different VMs for different applications, putting to work the resources that had been sitting idle all along.

With the advent of cloud computing, things changed a lot. Cloud providers like Azure and AWS introduced the pay-as-you-go model, which basically means users no longer need to invest huge amounts in buying servers; instead, they can use cloud VMs and pay for exactly the resources they use. Plus, with the power of the cloud, scaling those servers both horizontally and vertically became very easy.

But no solution is perfect, and VMs have their shortcomings too. They are large and consume hardware resources even when idle. Moreover, each VM contains a complete guest OS, which makes it big and time-consuming to provision.

Then came containers, and Docker became the largest container platform. Containers arrived with huge advantages over virtual machines: unlike VMs, containers are lightweight and can be started and stopped in milliseconds, reducing application downtime to almost nothing.

containers vs VMs

Above is a very high-level comparison between VMs and containers, where you can see that the thick guest-OS layer present in every VM is removed entirely in containers.

Now let's proceed with our setup.

Step 1: Create a Windows 10 VM using an Azure subscription.

Note: This step is only for those who do not already have a VM set up.

1. Log in to portal.azure.com

2. Select the VM tab and click Add

3. Select an appropriate VM image; in my case I chose Visual Studio Enterprise 2017 on Windows 10 Enterprise N (x64)

4. Select Resource Manager as the deployment mode.

5. Fill in the required details.

6. Select an appropriate VM size.

Recommended: Select D2S_V3 Standard (Note: Please stop the VM after use in order to save money)

Step 2: Connect to VM through local machine.

If you are on the public internet, you will be able to RDP directly to your VM's public IP address or DNS name.

Note: If a DNS name is set up, a dynamic IP address causes no issues while logging in. Otherwise, you can set up a static IP address.

Step 3: Download and install Docker engine in VM:

Now that you have successfully connected to your VM, download the stable Docker Community Edition from the Docker site and install the Docker engine.

Note: Turn on Hyper-V when prompted during the setup.

Important: After installing and starting Docker, right-click the tray icon and select Switch to Windows containers. This is required to run Windows-based Docker images. The switch takes a few seconds to complete.

Step 4: Create a new ASP.NET MVC application.

Use Visual Studio to create a new demo ASP.NET MVC application.

*Note: If you have not already set up Visual Studio 2017 Enterprise edition, use the product key from your MSDN subscription page and apply it in Visual Studio.

https://my.visualstudio.com/productkeys

**Do the above step only if signing in to your Visual Studio account with an organization ID does not work.

Step 5: Build and publish demo application.

Build the application and publish it to any location of your choice, the default being bin\Release\PublishOutput.

Step 6: Create dockerfile for the application.

Create the following dockerfile in the application directory itself.

# The `FROM` instruction specifies the base image. You are
# extending the `microsoft/aspnet` image.
FROM microsoft/aspnet

WORKDIR /inetpub/wwwroot

# The final instruction copies the site you published earlier into the container.
COPY /bin/Release/PublishOutput/ /inetpub/wwwroot

This is a basic dockerfile for an ASP.NET MVC application. As you can see, the base image is Microsoft's ASP.NET image.

FROM microsoft/aspnet → the base image

WORKDIR /inetpub/wwwroot → sets the working directory inside the container

COPY /bin/Release/PublishOutput/ /inetpub/wwwroot → copies the published content from the local machine/source (in our case the VM) to a path inside the container.

Note: The path here is specific to where you published the ASP.NET application. Since I used the default publish path, I kept the same path in my dockerfile. This can be parameterized when using Docker commands, but let's keep it hardcoded for now.
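As a sketch of that parameterization: a Dockerfile ARG lets the publish path be supplied at build time. PUBLISH_DIR below is a hypothetical argument name of my choosing, with a default matching the path used in this article.

```dockerfile
# Same base image and working directory as above.
FROM microsoft/aspnet
WORKDIR /inetpub/wwwroot

# PUBLISH_DIR is a hypothetical build argument; it defaults to the
# publish path used in this article and can be overridden at build time.
ARG PUBLISH_DIR=bin/Release/PublishOutput
COPY ${PUBLISH_DIR}/ /inetpub/wwwroot
```

You would then build with docker build -t <yourimagename> --build-arg PUBLISH_DIR=some/other/path . (if the argument is omitted, the default applies).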

Step 7: Docker build.

Now that everything is set up properly and the publish has completed successfully, we are going to containerize our application using Docker commands.

1. Open a command prompt and go to the project directory where we created the dockerfile. In my case I kept it in the root directory of the project.

2. Use command docker build -t <yourimagename> . to build the docker image.

This command will take some time while running for the first time.

3. Once it completes, check the list of Docker images using the command docker images

You should be able to see the docker image that you created.
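Putting step 7 together, a typical session might look like the following (the project path is a placeholder, and containerdemoapp is simply the example image name used later in this article):

```
REM Change to the directory that contains the dockerfile.
cd C:\Projects\DemoApp        (placeholder path; use your own project root)

REM Build the image and tag it.
docker build -t containerdemoapp .

REM Confirm the image was created.
docker images
```

This is a sketch of the workflow rather than something to paste verbatim, since it assumes a running Docker engine on the VM.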

Step 8: Docker Run

Now that we have created our Docker image, it's time to run it inside a Docker container to check that it's working properly.

docker run -d <imagename>

The above command runs the Docker image <imagename> in detached mode, which is specified by the -d flag. Note that docker run options must come before the image name; anything placed after the image name is treated as the command to run inside the container.

You can also name the running container using the --name flag, like:

docker run -d --name <containername> <imagename>

In my case I used the command:

docker run -d --name mydemocontainer containerdemoapp

To list all running containers, use the Docker command:

docker ps
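Step 8 combined into one short session (the names are the ones used above; substitute your own). Again, options such as -d and --name must come before the image name:

```
REM Start the container in detached mode with a friendly name.
docker run -d --name mydemocontainer containerdemoapp

REM The new container should appear in the list of running containers.
docker ps
```

As before, this is a sketch that assumes the Docker engine is running and the image from step 7 exists.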

Step 9: Check the application in browser

Due to a known issue localhost will not be accessible.

Note: With the current Windows Container release, you can’t browse to http://localhost. This is a known behavior in WinNAT, and it will be resolved in the future. Until that is addressed, you need to use the IP address of the container.

Use command:

docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" <containername>

This command will return the IP address of the container that is running the application.

You can use that IP address to check the application in browser.
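As a sketch of step 9 end to end (mydemocontainer is the container name used above, and the IP address shown is purely illustrative; yours will differ):

```
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" mydemocontainer

REM Suppose it prints something like 172.28.132.31; then browse to:
REM   http://172.28.132.31/
```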

Conclusion: To sum up, we discussed the basic concepts of Docker and containers, compared containers and VMs at a very high level, and did a hands-on exercise containerizing an ASP.NET MVC application and running it in a Windows container using Docker.

If you found this post helpful, do share it with your friends and colleagues. Till then, happy exploring. :)
