In this blog post, I would like to introduce and explain Azure’s new service offering, the Application Gateway Ingress Controller (AGIC), for Azure Kubernetes Service (AKS). AKS is gaining momentum within the enterprise, and Microsoft has been refining the offering to drive adoption. This feature piques my interest because of its ability to reduce latency and improve application performance within AKS clusters. I will first discuss the basics of the Application Gateway Ingress Controller. Then, I will dive into the underlying architecture and network components that provide the performance benefits, and explain the improvements and differences this offers over the existing in-cluster ingress controller for AKS. Finally, I will discuss the new Application Gateway features that Microsoft is developing to refine the service even further.

By definition, the AGIC is a Kubernetes application that makes it possible to use Azure’s L7 Application Gateway load balancer as the ingress for your AKS cluster, leveraging features such as the following (a minimal example of an AGIC-annotated Ingress resource follows the list):

  • URL routing
  • Cookie-based affinity
  • SSL termination or end-to-end SSL
  • Support for public, private, and hybrid websites
  • Integrated Web Application Firewall (WAF)
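
To make this concrete, here is a minimal sketch of an Ingress resource annotated for AGIC, applied from PowerShell. The application and service names are hypothetical; AGIC watches for resources like this and translates them into Application Gateway routing rules.

```powershell
# Hypothetical app and service names; the annotation is what hands this Ingress to AGIC.
@"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-app
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-app-svc
            port:
              number: 80
"@ | kubectl apply -f -
```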

The controller runs in its own pod on your AKS cluster. It monitors the cluster for changes and communicates the resulting routing updates through Azure Resource Manager to the Application Gateway, which allows for uninterrupted service to the cluster no matter what changes are made. Below is a high-level design of how the AGIC is deployed along with the Application Gateway and AKS, including the Kubernetes API server.

Azure Resource Manager

Next, let’s take a closer look at the AGIC and break down how it differs from the existing in-cluster ingress controller. The image below shows some distinct differences between the two. The in-cluster load balancer performs all the data path operations using the AKS cluster’s compute resources; because of this, it competes for the same resources as the applications running in the cluster.

In-Cluster Ingress Controller

In terms of networking features, there are two different models you can leverage when configuring your AKS cluster: kubenet and Azure CNI.

Kubenet

Kubenet is the default configuration for AKS cluster creation. Each node receives an IP address from the Azure virtual network subnet, while pods receive an IP address from a logically different address space. Network Address Translation (NAT) is then used to allow the pods to reach other resources in the virtual network. If you want to dive deeper into kubenet, this document is very helpful. Overall, these are the high-level features of kubenet (a short example of selecting the network plugin at cluster creation time follows the list):

  • Conserves IP address space.
  • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
  • You must manually manage and maintain user-defined routes (UDRs).
  • Maximum of 400 nodes per cluster.
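
The network model is chosen when the cluster is created. Below is a minimal sketch using the Az.Aks PowerShell module’s New-AzAksCluster cmdlet (the resource group and cluster names are placeholders); the -NetworkPlugin parameter is what selects kubenet or Azure CNI:

```powershell
# Placeholders: substitute your own resource group and cluster name.
# kubenet is the default; passing "azure" selects Azure CNI instead.
New-AzAksCluster -ResourceGroupName "rg-aks-demo" `
    -Name "aks-demo" `
    -NodeCount 3 `
    -NetworkPlugin "kubenet"
```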

Azure Container Networking Interface (Azure CNI)

The other networking model uses the Azure Container Networking Interface (Azure CNI), and this is the model the AGIC relies on. Every pod gets an IP address from the subnet and can be accessed directly; because of this, the IP addresses need to be unique across your network space and should be planned in advance. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved upfront for that node. This approach requires more planning, as it can otherwise lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow. A quick sketch of that capacity math is shown below.
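
As a rough illustration of that planning, here is a small PowerShell sketch; the node count and max-pods values are hypothetical, and the formula is only a back-of-the-envelope estimate:

```powershell
# Hypothetical cluster sizing: adjust these values to your own design.
$nodeCount      = 3    # number of nodes, plus any you expect to add when scaling or upgrading
$maxPodsPerNode = 30   # the per-node 'max pods' setting; this many pod IPs are reserved up front

# Each node consumes one IP for itself, plus one IP per pod it can host.
$requiredIPs = $nodeCount * (1 + $maxPodsPerNode)

Write-Output "This cluster reserves at least $requiredIPs IP addresses from the subnet."
# A /24 subnet (251 usable addresses in Azure) covers this example, but upgrades,
# future growth, and anything else sharing the subnet also need headroom.
```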

The Azure documentation also outlines the advantages of using Azure CNI:

  • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
  • Requires more IP address space.

Performance

Because the Azure CNI networking model is used, the Application Gateway has direct access to the AKS pods. According to the Microsoft documentation, the AGIC can therefore achieve up to 50 percent lower network latency than in-cluster ingress controllers. Application Gateway is also a managed service backed by Azure virtual machine scale sets. This means that, instead of using the AKS cluster’s compute resources for data processing the way an in-cluster ingress controller does, the Application Gateway leverages the Azure backbone for things like autoscaling at peak times, so it does not add to the compute load of your AKS cluster and potentially degrade the performance of your application. Microsoft compared the performance of the two services: they set up a web app running on 3 nodes with 22 pods per node for each service and generated traffic against the web apps. In their findings, under heavy load, the in-cluster ingress controller had approximately 48 percent higher network latency per request compared to the AGIC.

Conclusion

Overall, the AGIC offers a lot of benefits for businesses and organizations leveraging AKS, from improving network performance to handling cluster configuration changes on the fly as they are made. Microsoft is also building on this offering by adding more features to the service, such as using certificates stored on Application Gateway, mutual TLS authentication, gRPC, and HTTP/2. I hope you found this brief post on the Application Gateway Ingress Controller helpful and useful for future clients. If you want to see Microsoft’s whole article on AGIC, click here. You can also check out the Microsoft Ignite video explaining more about the advancements of Application Gateway.

AIS is pleased to announce the availability of the NIST SP 800-53 Rev 4 Azure Blueprint Architecture.

(NIST SP 800-53 security controls could be an entire series of blog posts in itself…so if you want to learn more than I cover here, check out NIST’s website.)

The NIST SP 800-53 Rev 4 Azure Blueprint Architecture applies a NIST SP 800-53 Rev 4 compliant architecture with the click of a button to the Azure Government Cloud subscription of your choice. Okay, maybe there are a few clicks with some scripts to prep the environment, but I swear that’s it!  This allows organizations to quickly apply a secure baseline architecture build to their DevOps pipeline. They can also add it to their source control themselves to accompany their application source code. (If you want assistance in implementing this within your greater cloud and/or DevOps journey, AIS can help with our Compliance and Security Services offering.) Read More…

At our second session for Microsoft Ignite, Jason McNutt and I discussed Azure Resource Manager (ARM) and Compliance. We showed attendees how to develop ARM templates that are compliant out of the box, with security standards such as FISMA and FedRAMP. Additionally, we went over how to automatically generate security control documentation based on ARM tags and open-source libraries like OpenControl.
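
As a simple, hypothetical illustration of the tagging idea (the tag names and control IDs below are made up for this example, not the exact convention from the session), a resource group can carry control metadata that documentation tooling built on libraries like OpenControl can later harvest:

```powershell
# Hypothetical tag names and control IDs, purely for illustration.
Set-AzureRmResourceGroup -Name "rg-demo" -Tag @{
    "compliance-framework" = "NIST SP 800-53 Rev 4"
    "security-controls"    = "AC-2, AU-2, SC-7"
}
```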

Below is a short 15-minute video summarizing our Secure DevOps with ARM presentation:

Microsoft has over a thousand Virtual Machine images available in the Microsoft Azure Marketplace. If your organization has its own on-premises “Gold Image” that’s been tailored, hardened, and adapted to meet specific organizational requirements (compliance, business, security, etc.), you can bring those images into your Azure subscription for reuse, automation, and/or manageability.

I recently had the opportunity to take a client’s virtualized Windows Server 2008 R2 “Gold Image” in .OVA format (VMware), extract the contents using 7-Zip, run the Microsoft Virtual Machine Converter to create a VHD, prepare and upload the VHD, and create a Managed Image that was then deployed using PowerShell and an Azure Resource Manager template.

It’s actually quite simple! Here’s how… Read More…
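
To give a rough sense of the upload and image-creation steps, here is a minimal AzureRM PowerShell sketch; the resource group, storage destination, and file paths are placeholders, and the VHD is assumed to already be generalized:

```powershell
# Placeholders: substitute your own resource group, storage account, and VHD path.
$resourceGroup  = "rg-images"
$location       = "eastus"
$localVhd       = "C:\images\gold-image-ws2008r2.vhd"
$destinationVhd = "https://mystorageaccount.blob.core.windows.net/vhds/gold-image-ws2008r2.vhd"

# Upload the prepared VHD to an Azure storage account.
Add-AzureRmVhd -ResourceGroupName $resourceGroup -Destination $destinationVhd -LocalFilePath $localVhd

# Create a managed image definition that points at the uploaded VHD, then build the image.
$imageConfig = New-AzureRmImageConfig -Location $location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $destinationVhd
New-AzureRmImage -ImageName "gold-image-ws2008r2" -ResourceGroupName $resourceGroup -Image $imageConfig
```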

(Part One of this series can be found here.)

Deploying Azure ARM Templates with PowerShell

After you’ve created your template, you can use PowerShell to kick off the deployment process. PowerShell is a great tool with a ton of features to help automate Azure processes. In order to deploy Azure ARM Templates with PowerShell, you will need to install the Azure PowerShell cmdlets. You can do this by simply running the command Install-Module AzureRM inside a PowerShell session.
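
To illustrate, here is a minimal sketch of kicking off a deployment with the AzureRM cmdlets; the resource group name, location, and template file paths are placeholders:

```powershell
# Sign in and select the subscription you want to deploy into.
Login-AzureRmAccount

# Placeholders: use your own resource group name, location, and template files.
$resourceGroup = "rg-demo"
New-AzureRmResourceGroup -Name $resourceGroup -Location "eastus"

# Kick off the deployment using the template and its parameter file.
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroup `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```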

Check out this link for more information on installing Azure PowerShell cmdlets. PowerShell works best on a Windows platform, although there is a version now out for Mac that you can check out here. You can also use Azure CLI to do the same thing. PowerShell and Azure CLI are quick and easy ways to create resources without using the Portal. I still stick with PowerShell, even though I primarily use a Mac computer for development work. (I’ll talk more about this in the next section.) Read More…

(Don’t miss Part Two and Part Three of this series!)

Early in my web development career, I always tried to avoid deployment work. It made me uneasy to watch other developers constantly bang their heads against their desks, frustrated with getting our app deployed to whatever cloud service we were using at the time. Deployment work became the “short straw” assignment because it was always a long, unpredictable and thankless task. It wasn’t until I advanced in my tech career that I realized why I felt this way.

My experience with deployment activities, up to this point, always involved a manual process. I thought that the time it took to set up an automated deployment mechanism was a lot of unnecessary overhead – I’d much rather spend my time developing the actual application and spend just a few hours every so often on a manual deployment process when I was ready. However, as I got to work with more and more experienced developers, I began to understand that a manual deployment process is slow, unreliable, unrepeatable, and rarely ever consistent across environments. A manual deployment process also requires detailed documentation that can be hard to follow and in constant need of updating.

As a result, the deployment process becomes this mysterious beast that only a few experts on your development team can tame. This will ultimately isolate the members of your development team, who could be spending more time working on features or fixing bugs related to your application or software. Although there is some initial overhead involved when creating a fully automated deployment pipeline, subsequent deployments of the same infrastructure can be done in a matter of seconds. And since validation is also baked into the automated process, your developers will only have to devote time to application deployment if something fails or goes wrong.

This three-part blog series will serve to provide a general set of instructions on how to build an automated deployment pipeline using Azure cloud services and Octopus Deploy, a user-friendly automation tool that integrates well with Azure. It might not detail out every step you need, but it will point you in the right direction, and show you the value of utilizing automated deployment mechanisms. Let’s get started. Read More…

In Part 1 of this blog series, we outlined the case for a Service Catalog driven approach for Enterprise DevOps. At AIS, after having worked with several of our Enterprise clients with their DevOps journey, we harvested our learnings in the form of a fully managed service catalog – AIS Service Catalog. AIS Service Catalog is a Microsoft Azure focused SaaS offering that is designed to jumpstart your DevOps efforts by bringing you the benefits of the service catalog driven approach.

A Service Catalog for Microsoft Azure

In Part 1 of this blog post series, we identified four key features of a Service Catalog that are fundamental to establishing DevOps in an enterprise. Let us briefly talk about how AIS Service Catalog realizes these features using Microsoft Azure specific building blocks. Read More…