This article originally appeared on May 10, 2018.
Containers have been gaining steam at a rapid rate for the last few years, and as companies seek to run containers in production, new challenges arise: scaling, load balancing, deployment and service discovery are all prime examples. Container orchestrators are tools that help solve these problems and increase developer productivity.
Azure Kubernetes Service (AKS) makes it fairly simple to set up and operationalize Kubernetes, a container orchestrator, in Azure. It takes care of tasks such as automating Kubernetes upgrades and cluster scaling. Because it’s Kubernetes, it also maintains application portability, meaning you aren't locked into Azure.
In this article, we’ll look at setting up an AKS cluster and deploying an application. The example is an MVC application that communicates with a back-end service using NATS. NATS is a simple, high-performance, open-source messaging system for cloud-native applications, Internet of Things (IoT) messaging and microservices architectures.
This example application will introduce us to a few of the fundamental Kubernetes concepts: pods, controllers, services and service discovery. It will also show us how we can structure an application to scale back-end services.
Getting started with AKS
Install Azure CLI.
Setting up an AKS cluster is easy using the Azure CLI. The Azure CLI is the command-line interface that allows you to manage your Azure resources. If you don't have it installed, you can find directions on Microsoft’s website.
Once you have the Azure CLI installed, log in and ensure the needed Azure service providers are enabled.
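As a sketch of that step, the commands below sign in and register the resource providers an AKS cluster typically depends on (re-running them is harmless if the providers are already registered):

```shell
# Sign in to your Azure account interactively
az login

# Register the resource providers AKS relies on
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Network
```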
Create a resource group.
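For example, the following creates a resource group; the aksDemo-rg name and eastus location are illustrative, not requirements:

```shell
# Create a resource group to hold the cluster resources
az group create --name aksDemo-rg --location eastus
```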
As AKS is still in preview, only some locations are available.
Create an AKS cluster.
Create a one-node AKS cluster using the Azure CLI. By default, this command creates a standard DS1 v2 Virtual Machine (VM) node. This step takes quite a few minutes to complete.
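A minimal version of that command might look like the following, carrying over the example names from earlier:

```shell
# Create a one-node AKS cluster (provisioning takes several minutes)
az aks create \
  --resource-group aksDemo-rg \
  --name aksDemo \
  --node-count 1 \
  --generate-ssh-keys
```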
This step creates a new resource group and adds several new resources to create your cluster.
Connecting to the cluster
Kubectl is the Kubernetes CLI. If you need to install it, you can run this command using the Azure CLI:
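The Azure CLI ships a helper for this:

```shell
# Install kubectl via the Azure CLI
az aks install-cli
```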
Once you have the CLI installed, you can configure its context to point to your new cluster by using this command:
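Assuming the example names from earlier, that looks like:

```shell
# Merge the cluster’s credentials into your kubeconfig
az aks get-credentials --resource-group aksDemo-rg --name aksDemo
```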
You can confirm everything is configured correctly by using the Kubectl get nodes command:
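That check is simply:

```shell
# List the nodes in the cluster
kubectl get nodes
```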
You should see your cluster’s node listed in the output. Voila, you have an AKS cluster set up in Azure, and you’re ready to deploy applications.
Scaling the cluster
As I said earlier, the benefit of AKS is that it makes it easy to manage your Kubernetes cluster. For example, you can easily scale your cluster to two nodes. This command takes a couple of minutes because it’s spinning up and adding a new Azure VM.
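Using the example names from earlier, the scale operation looks like:

```shell
# Scale the cluster from one node to two
az aks scale --resource-group aksDemo-rg --name aksDemo --node-count 2
```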
Deploying an application
The example application is available on GitHub. The branch aks-getting-started-v1 was used at the time of this blog post.
This application consists of a front-end ASP.NET Core MVC application that sends commands via NATS messaging to a back-end .NET Core service. Within the k8s folder, you’ll find yaml files that describe the desired state of our application.
If you want to install and explore this application locally using Docker, you can run the PowerShell script local-deploy.ps1. This will use Docker Compose to build and deploy the application.
In this example, each component to deploy consists of a deployment controller, pod and service. The following is a brief and high-level explanation of each component.
Pods are the basic building blocks in Kubernetes. A pod can run one or more containers. Pods are designed as disposable, mortal entities: once a pod dies, it stays dead, and a new pod is created to replace it.
A deployment controller allows us to describe the desired state of our pods. This controller will ensure we have the correct number of pods by recovering any pods that may fail. It also governs rolling out updates.
A service exposes and load balances between pods, either internally or externally to the cluster. Every service in the cluster is assigned a DNS name, and our application will use this to integrate components.
Deploying a NATS container
Within the k8s folder, let's examine nats.yml. This yaml file consists of a deployment and a service that exposes the NATS container internally to the cluster. Use Kubectl to create the deployment and service on your cluster. After we apply the yaml file and look through the logs, we’ll examine the file in more detail.
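Assuming you’re running from the repository root, applying the manifest looks like:

```shell
# Create the NATS deployment and service from the manifest
kubectl apply -f k8s/nats.yml
```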
Let's check and see if the pod was created and check the logs to see if NATS has started.
First, let's list our pods so we can see the status and use the pod name for displaying logs.
Next, show the log of the pod. We can see that the NATS server has started successfully.
Finally, get the services. We can see our new nats-service listed in the output.
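Those three checks can be sketched as follows; the app=nats label used to select the pod is an assumption about how the manifest labels its pods:

```shell
# 1. List pods and note the NATS pod's name and status
kubectl get pods

# 2. Show the pod's log (the label selector is an assumption;
#    you can also paste the pod name from the previous command)
kubectl logs -l app=nats

# 3. List services and confirm nats-service appears
kubectl get services
```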
The deployment describes the name, number of replicas, any labels and the pod template. The pod template describes how the containers should be configured. In our case, we’re using the container image nats:linux, and we want to expose the ports 4222, 6222 and 8222.
The service describes its name and the ports we want to map and expose to the cluster. The NodePort type exposes the service on a port on each node; because the AKS nodes themselves aren’t publicly reachable, the service is effectively available only within the cluster. Later, when we examine the MVC web app, we’ll see how to expose a service externally.
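Putting that description together, nats.yml plausibly looks something like the sketch below. The nats:linux image, the three ports, the NodePort type and the nats-service name come from the article; the deployment name and the app: nats labels are assumptions, and older API versions (such as apps/v1beta1, current when this article appeared) may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-deployment        # name is an assumption
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nats                # label is an assumption
  template:
    metadata:
      labels:
        app: nats
    spec:
      containers:
      - name: nats
        image: nats:linux
        ports:
        - containerPort: 4222  # client connections
        - containerPort: 6222  # cluster routing
        - containerPort: 8222  # HTTP monitoring
---
apiVersion: v1
kind: Service
metadata:
  name: nats-service
spec:
  type: NodePort
  selector:
    app: nats
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
```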
It’s important to note that if you’re using NATS in production, you should configure a NATS cluster. The NATS documentation has more information on setting up a NATS cluster on Kubernetes.
Deploying the todo-service
Within the k8s folder, let's examine todo-service.yml. This yaml file consists of a deployment. No Kubernetes service is needed because the todo service communicates via NATS and doesn’t need to be exposed to the cluster. Use Kubectl to create the deployment and the previous commands to see if the pod is running.
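Again from the repository root, that step looks like:

```shell
# Create the todo-service deployment and check its pod
kubectl apply -f k8s/todo-service.yml
kubectl get pods
```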
The log should show that the subscription has started.
As part of the container template, we specify the NATS connection via an environment variable. We use the name given to the NATS service and the port the service exposed internally to the cluster.
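The relevant fragment of the container template might look like the sketch below; the environment variable and image names are illustrative, but the nats-service host name and client port 4222 come from the service we deployed earlier:

```yaml
containers:
- name: todo-service                   # container name is illustrative
  image: todo-service:v1               # image name is illustrative
  env:
  - name: NatsConnection               # variable name is an assumption
    value: "nats://nats-service:4222"  # DNS name of the NATS service + client port
```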
Deploying the todo-webapp
Within the k8s folder, let's examine todo-webapp.yml. This yaml file consists of a deployment and a service that exposes the MVC application externally. Use Kubectl to create the deployment and service. Again, we’ll use the previous commands to see if the pod is running.
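From the repository root:

```shell
# Create the web app deployment and service, then check the pod
kubectl apply -f k8s/todo-webapp.yml
kubectl get pods
```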
The log should show that your ASP.NET Core application has started and is listening on port 80.
As part of the container template, we assign the ASPNETCORE_ENVIRONMENT variable and expose port 80. The service maps port 51101 to container port 80. The type LoadBalancer tells Kubernetes to use an external load balancer in Azure so the website is exposed externally. Creating this service can take some time. Check the service for an external IP.
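A sketch of the service portion, based on the description above (the names are assumptions; the port mapping and LoadBalancer type are from the article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: todo-webapp-service   # name is an assumption
spec:
  type: LoadBalancer          # provisions an Azure public load balancer
  selector:
    app: todo-webapp          # label is an assumption
  ports:
  - port: 51101               # externally exposed port
    targetPort: 80            # container port
```

You can watch for the external IP to be assigned with `kubectl get services --watch`.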
You can see in the Azure Portal that this creates a public IP address.
Running the application
Navigate to the URL and port provided by Kubectl get services. The MVC home page should load.
Navigate to the todo section, enter a name and description, and then press Add. This sends a command via NATS to the back-end service. The MVC controller waits for a response and displays it to the view.
Check the logs of the service to verify it was processed.
You should see a message that your todo command was processed.
Realizing the full value of containers in production requires container orchestration. AKS makes it quick and easy to get started leveraging Kubernetes to build scalable applications in Azure.
Though not covered in this tutorial, .NET application modernization with container orchestration has a lot of potential. As support grows for Windows containers, it will become easier to modernize a legacy .NET application by containerizing it and deploying it in the cloud.
It also becomes easier to deploy new services to extend legacy application functionality. This approach saves IT organizations from having to do complete application rewrites and focuses on new business value and speed to market.
You should now have an idea about how easy it is to set up AKS and start structuring applications to take advantage of the platform.