As we think about the services Azure offers, we often think of apps (e.g., App Services, AKS, and Service Fabric) and data (e.g., Azure Storage, Databricks, and Azure Data Lake). It turns out you can also leverage Azure purely as a Network-as-a-Service (NaaS). The network is a basic building block for all the app and data services in Azure, but by NaaS I am explicitly talking about leveraging Azure networking in a *standalone* manner. Allow me to explain what I mean: in the picture below, you will see a representation of the Azure global footprint. It is so vast that it includes 100K+ miles of fiber and subsea cables, and 130 edge locations connecting over 50 regions worldwide. Think of NaaS as a way to tap into Azure’s global infrastructure to improve the network performance and resilience of your applications, regardless of whether the apps are hosted in Azure or not.

Azure's Global Footprint

Let’s discuss two specific Azure services that offer NaaS capabilities. Note that there are other services, like Azure Firewall (think firewall as a service), that can fall into the NaaS category. However, I am limiting my discussion to two services – Azure Front Door and Azure Virtual WAN. In my opinion, these services most closely align with a focus on leveraging Azure network infrastructure and services in a standalone manner.

Azure Front Door Service Icon

Azure Front Door Service

Azure Front Door service allows you to define global routing for your applications that optimizes performance and resilience. Front Door is a layer 7 (HTTP/HTTPS) service. Please refer to the diagram below for a high-level view of how Front Door works: your application’s URL is advertised using the anycast protocol, so traffic directed towards your application gets picked up by the “closest” Azure Front Door and routed to your application, whether it is hosted in Azure or on-premises. For applications hosted outside of Azure, the traffic traverses the Azure network to the point of exit closest to the location of the app.

The primary benefit of using Azure Front Door is improved network performance, because traffic is routed over the Azure backbone instead of the long-haul public internet. It turns out there are several secondary benefits to highlight: you can increase the reliability of your application by having Front Door provide instant failover to a backup location, with smart health probes checking the health of your application. Additionally, Front Door offers SSL termination and certificate management, application security via Web Application Firewall, and URL-based routing.

Routing Azure Front Door
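To make this concrete, here is a minimal sketch of standing up a Front Door with the Azure CLI. It assumes the front-door CLI extension is available; the resource group, profile name, and backend address are illustrative:

# One-time: add the Front Door extension to the Azure CLI
az extension add --name front-door

# Create a Front Door profile that fronts an existing backend
az network front-door create \
  --resource-group my-rg \
  --name my-frontdoor \
  --backend-address myapp.azurewebsites.net \
  --accepted-protocols Http Https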

Azure Virtual WAN

Azure Virtual WAN

Azure Virtual WAN offers branch connectivity to, and through, Azure. In essence, think of Azure regions as hubs that, along with the Azure network backbone, can help you establish branch-to-VNet and branch-to-branch connectivity.

You are probably wondering how Virtual WAN relates to existing cloud connectivity options like point-to-site, site-to-site, and ExpressRoute. Azure Virtual WAN brings these connectivity options together into a single operational interface.

Azure Virtual Wan Branch Connectivity

The following diagram illustrates a client’s virtual network overlay over the Azure backbone. The Azure Virtual WAN virtual hub is located in the West Europe region. The virtual hub is a managed virtual network that, in turn, enables connectivity to VNets in West Europe (VNetA and VNetB) and to an on-premises branch office (testsite1) connected via a site-to-site VPN tunnel over IPsec. An important thing to note is that the site-to-site connection is hooked to the virtual hub and *not* directly to the VNet (as is the case with a virtual network gateway).

Virtual Network Overlay Azure Backbone
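As a rough sketch, the hub-and-spoke pieces above can be created with the Azure CLI’s virtual-wan extension. The names, region, and address prefix below are illustrative:

# One-time: add the Virtual WAN extension
az extension add --name virtual-wan

# Create the Virtual WAN and a virtual hub in West Europe
az network vwan create --resource-group my-rg --name my-vwan --location westeurope
az network vhub create --resource-group my-rg --name my-hub \
  --vwan my-vwan --address-prefix 10.1.0.0/24 --location westeurope

# Connect an existing VNet (e.g., VNetA) to the hub
az network vhub connection create --resource-group my-rg --name VNetA-conn \
  --vhub-name my-hub --remote-vnet VNetA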

Finally, you can work with one of many Azure Virtual WAN partners to automate the site-to-site connection, including setting up the branch device (a VPN or SD-WAN – software-defined wide area network – appliance) to connect to Azure.

In this episode of the Azure Government video series, Steve Michelotti sits down with AIS’ very own Vishwas Lele to discuss migrating and modernizing with Kubernetes on Azure Government. You’ll learn about the traditional approaches for migrating workloads to the cloud, including:

1. Rehost
2. Refactor
3. Reimagine

You will also learn how Kubernetes provides an opportunity to fundamentally rethink these traditional approaches to cloud migration by leveraging Kubernetes in order to get the “best of all worlds” in the migration journey. If you’re looking to migrate your existing legacy workloads to the cloud, while minimizing code changes and taking advantage of innovative cloud-native technologies, this is the video you should watch!


I just returned from Microsoft BUILD 2019, where I presented a session on Azure Kubernetes Service (AKS) and Cosmos DB. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database (like Cosmos DB).

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck

We started with a discussion of cloud-native apps, along with a quick introduction to AKS and Cosmos DB. We quickly transitioned into stateful app considerations and talked about the newer stateful capabilities in Kubernetes, including persistent volumes (PV), persistent volume claims (PVC), StatefulSets, CSI, and operators. While these capabilities represent significant progress, they don’t yet match up to fully managed external services like Cosmos DB.

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Cloud Native Tooling

One option is to use the Open Service Broker – it allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (Service Catalog).

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck svcat
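As a sketch of what that looks like in practice, svcat can provision and bind an external service from inside the cluster. The instance name below is made up, and the class and plan names are drawn from the Open Service Broker for Azure catalog, so they may differ by version:

# Provision a Cosmos DB account through the Open Service Broker for Azure
svcat provision my-cosmos --class azure-cosmosdb-sql-account --plan account

# Bind it; credentials (URI, keys) land in a Kubernetes secret for the app to consume
svcat bind my-cosmos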

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck SRE

External services like Cosmos DB can go beyond cluster SRE and offer, in essence, “turn-key” SRE – specifically geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Multi-Master Support

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Configure Regions

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Portability

Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. Open Service Broker does not, however, deal with syntactic differences, such as connection-string prefix differences between cloud providers. One way to handle these differences is to use Helm.

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 


As developers, we spend a lot of time developing APIs – sometimes to expose data that we’ve transformed, sometimes to ingest data from other sources. Meanwhile, more and more companies are jumping into the realm of API management: Microsoft, Google, MuleSoft, and Kong all have products that provide this functionality. With this much investment from the big players in the tech industry, API management is obviously a priority. Now, why would anyone want to use an API management tool?

The answer is simple: It allows you to create an API Gateway that you can load all your APIs into, providing a single source to query and curate. API Management makes life as an admin, a developer, and a consumer easier by providing everything for you in one package.

Azure API Management

Azure API Management logo

What does Azure API Management provide? Azure API Management (APIM) is a cloud-based PaaS offering available in both commercial Azure and Azure Government. APIM provides a one-stop shop for API authority, with the ability to create products, enforce policies, and utilize a robust developer portal.

Not only can API Management integrate seamlessly with your existing Azure infrastructure, but it can also manage APIs that exist on-prem and in other clouds. APIM is also available in both the IL4 and IL5 environments in Azure Government, which allows for extensibility and management for those working in the public sector.

APIM leverages a few key concepts to provide its functionality to you as a developer, including:

  • Products
  • Policies
  • Developer Portal

From providing security to leveraging rate-limiting and abstraction, Azure API Management does it all for API consolidation and governance in Azure. Any API can be ingested, and it gets even easier when APIs follow the OpenAPI Format.

What Are Products?

Products are a layer of abstraction provided inside APIM. Products allow you to create subsets of APIs that are already ingested into the solution—allowing you to overlap the use of APIs while restricting the use of individual collections of APIs. This level of compartmentalization allows you to not only separate your APIs into logical buckets but also enforce rules on these products separately, providing one more layer of control.

Product access is very similar to Azure RBAC, with different groups created inside the APIM instance. These groups are yet another way for APIM admins to encapsulate and protect their APIs, allowing them to add users already associated with the APIM instance into separate subsets. Users can also be members of multiple groups, so admins can make sure the right people have access to the right APIs stored in their APIM instance.
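As an illustrative sketch, assuming a recent Azure CLI that includes the az apim product commands (all names below are placeholders), creating a product and adding an existing API to it might look like this:

# Create a published product that requires a subscription key
az apim product create \
  --resource-group my-rg --service-name my-apim \
  --product-id billing --product-name "Billing APIs" \
  --subscription-required true --state published

# Add an already-ingested API to the product
az apim product api add \
  --resource-group my-rg --service-name my-apim \
  --product-id billing --api-id billing-api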

What Are Policies?

Policies are APIM’s way of enforcing certain restrictions and providing a more granular level of control. There is an entire breadth of policies available in APIM, ranging from simply disallowing usage of an API after it has been called five times, to authentication, logging, caching, and transformation of requests or responses from JSON to XML and vice versa. Policies are perhaps the most powerful function of APIM and drive the control that everyone wants and needs. Policies are written in XML and can be easily edited within the APIM XML editor. Policies can also leverage C# 7 syntax, which brings the power of the .NET Framework to your APIM governance.
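For a flavor of the syntax, here is a small illustrative inbound policy section that combines the five-call rate limit mentioned above with a C# expression (the header name is made up):

<policies>
  <inbound>
    <base />
    <!-- Reject callers after 5 calls in any 60-second window -->
    <rate-limit calls="5" renewal-period="60" />
    <!-- C# expression support: stamp a correlation id onto the request -->
    <set-header name="x-correlation-id" exists-action="override">
      <value>@(Guid.NewGuid().ToString())</value>
    </set-header>
  </inbound>
</policies>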

What Is the Developer Portal?

The Azure API Management Developer Portal is an improved version of the Swagger documentation that’s generated when you use the OpenAPI spec. The Developer Portal provides an area for developers to readily see APIs, products, and associated applications. The Portal also provides sample request bodies (no more guessing API request structures!) and responses, along with code samples in many different languages.

Finally, the portal also allows you to try API calls with customized request bodies and headers, so you have the ability to see exactly what kind of call you want to make. Along with all that functionality, you can also download your own copy of the OpenAPI Spec for your API after it’s been ingested into your instance.

Why Should I Use APIM?

Every business should be using some form of API Management. You’ll be providing yourself a level of control previously not available. By deploying an API Gateway, that extra layer of abstraction allows for much tighter control of your APIs. Once an API has been ingested, APIM provides many additional functionalities.

First, you can match APIs to products, providing a greater level of compartmentalization. Second, you can add different groups to each product, with groups being subsets of users (e.g., back-end devs, billing devs, etc.). Third, you automatically generate a robust developer portal, which provides all of the functionality of the Swagger portal with added features, such as code snippets. Finally, APIM has complete integration with Application Insights in commercial Azure, providing access to a world-class logging and visualization tool.

Azure API Management brings power to the user, and no API should be left out.

ServiceNow Logo
ServiceNow is a sophisticated information technology tool and one that can be easily underestimated.

Looking back, I initially thought ServiceNow was limited to a helpdesk front-end intake tool.  

But lately, I’ve realized that while becoming the leading request-intake tool, ServiceNow must have also recognized its advantage in offering additional capabilities sought by organizations of all sizes. Perhaps ServiceNow came to realize that aggregations of problem-related requests may be viewed from a different perspective: as an excellent barometer for latent pain points. This prompted ServiceNow to pursue larger goals.

Inside/Out

Delving into ServiceNow permitted me to easily write code within SharePoint that reaches into ServiceNow to obtain or modify data in its tables. I was also able to write code that let me reach outward from inside ServiceNow and obtain or modify data that resides in SharePoint.

This two-way data traffic was easily accomplished through the magic of REST and the appropriate sprinkle of user authentication. Any environment enabled with REST extensions would work just as well.    

So ServiceNow is highly “integrate-able,” as we can easily extend outward from inside it, as well as extend inward from virtually any environment. Add in the fact that ServiceNow is a Platform as a Service (PaaS), and my eyes start widening: I may have stumbled upon an extensible business-support tool that could be the long-awaited game changer in how transactional work is organized, managed, and used to identify worthwhile projects driven by demonstrated, specific business needs.

What’s Next

My expedition into the ServiceNow platform is still fairly new but it makes me stand back when I hear that some government organization leaders are considering replacing certain (perhaps many) SharePoint uses with the evolving tools offered by ServiceNow. Many large organizations have already adopted it, and the company has experienced extraordinary growth from its birth in 2003.  In just 16 years, ServiceNow has grown to 6,000 employees worldwide with revenue of over $2.6 billion.    

My experience using ServiceNow from the perspective of a full administrator has been extremely positive. As an administrator, I’ve been able to access anything I need through the ease of highly-simplified navigation.  The vertical navigation segment lets me expand by clicking on high-level items. Alternatively, the navigator allows me to filter via a field that eliminates anything that doesn’t match the string I’m keying and surfaces items from the hierarchy for anything that does match. The result is a stunningly quick way to find any table, workflow, or tool built into ServiceNow for administrators. In fact, working as an administrator has been much more intuitive than entering a ServiceNow ticket for any specific request.  

 

There are many times when you want to move to Azure but don’t have the liberty of re-platforming the database due to application dependencies. Don’t despair: there are still ways to move your workload. Azure has support for Oracle! We previously showed a way to almost run an Oracle database as a service on Azure, but if you have a line-of-business (LOB) application, or an application that requires support for high availability, that is also possible: presenting Azure support for Oracle disaster recovery, including Oracle Data Guard.

Oracle Support on Azure

Oracle supports running Oracle DB 12.1 Standard and Enterprise editions in Azure on virtual machine (VM) images based on Oracle Linux. Oracle has guaranteed license mobility from on-premises to Azure. These images are considered “Bring Your Own License,” and as such you will only be charged for the compute, storage, and networking costs incurred by running a VM. It is assumed you are properly licensed to use Oracle software and that you have a current support agreement in place with Oracle.

Licensing Details

Microsoft Azure is an “Authorized Cloud Environment.” Under this program, you “count two vCPUs as equivalent to one Oracle Processor license if hyper-threading is enabled, and one vCPU as equivalent to one Oracle Processor license if hyper-threading is not enabled.” Please note that unlimited license agreements (ULAs) may also be used.

High Availability

Any conversation about this topic requires us to look at the recovery time objective (RTO) and recovery point objective (RPO) of an application – that is, how long the database can be down and the maximum amount of data loss that can be tolerated. Oracle defines well-known reference architectures based on RTO and RPO objectives.

Assuming we require comprehensive high availability (HA) and disaster recovery (DR) – real-time failover and zero/near-zero data loss – we will pursue an implementation using Oracle Data Guard.

Oracle Data Guard

Data Guard is an Oracle offering that ensures high availability, data protection, and disaster recovery for enterprise data. It uses a standby database (an exact replica) to survive outages of any kind, including data corruption. This technology is available in Oracle Enterprise Edition. Oracle also provides, as a separately licensed product, Oracle Active Data Guard, which expands these capabilities with real-time data protection and disaster recovery.

A few key features from Oracle regarding Oracle Data Guard are:

  • Fast redo transport for best recovery point objective (RPO), fast apply performance for best recovery time objective (RTO)
  • Fast failover to a standby database to maintain availability should the primary database fail for any reason
  • Automatic or automated (depending upon configuration) re-synchronization of a failed primary database, quickly converting it to a synchronized standby database after a failover occurs.
  • Reduction of planned downtime by utilizing a standby database to perform maintenance in a rolling fashion

Oracle Real Application Clusters (RAC) was also considered, but it is not supported on any cloud environment other than Oracle’s.

Oracle Data Guard on Azure

Microsoft recommends — for the best performance of Oracle DB production workloads on Azure — to be sure to properly size the VM image and use Managed Disks that are backed by Premium Storage.

Azure provides the M-Series virtual machines that are ideal for extremely large databases or other applications that benefit from high vCPU counts and large amounts of memory.

Azure also provides a Managed Disk offering called Ultra SSD that can scale performance up to 160,000 IOPS. IOPS are “an input/output performance measurement used to characterize computer storage devices.” Database performance is often constrained by the performance of the underlying storage; therefore, utilizing the Ultra SSD offering, you can be sure to minimize this concern.

Another key concept to enable this high availability scenario is to leverage Azure availability sets for the virtual machines. Availability sets are a fundamental technology that makes our scenario possible.

  • Each virtual machine in your availability set is assigned an update domain and a fault domain by the underlying Azure platform.
  • Update domains allow for a virtual machine to recover before maintenance on another virtual machine is initiated on a different update domain.
  • Fault domains define the group of virtual machines that share a common power source and network switch.
  • Managed disks provide better reliability for availability sets by ensuring that the disks of VMs in an availability set are sufficiently isolated from each other to avoid single points of failure.
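As a brief sketch (resource names and domain counts are illustrative), the availability set for the primary and standby database VMs can be created with the Azure CLI:

# Create the availability set the Oracle VMs will join
az vm availability-set create --resource-group oracle-rg --name oracle-db-avset \
  --platform-fault-domain-count 2 --platform-update-domain-count 5

# Each `az vm create` for the primary and standby then passes:
#   --availability-set oracle-db-avset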

Availability zones, an alternative to availability sets, expand the level of control you have to maintain the availability of the database VMs. With Availability Zones, Azure offers a 99.99% VM uptime Service Level Agreement (SLA).

Lastly, Azure Site Recovery (ASR) provides an additional level of disaster recovery via its ability to orchestrate replication, perform disaster recovery testing, and run failovers and failback; ASR is fully compatible with Oracle Data Guard on Azure.

A logical architecture using a virtual network (VNET) and subnets (for isolation and protection), Oracle DB VMs, premium storage, and Oracle Data Guard is presented below.

Diagram of VM architecture

AIS recently participated in a joint proof of concept (POC) where this architecture was deployed with M128 VMs and premium storage and achieved the following goals:

  • ~35TB Database (Oracle Table Limit)
  • ~30-45 second failover under a user load of 5,000-15,000 concurrent users, with zero data loss
  • Active Data Guard Running in Max Availability (SYNC, Successful Failover)
  • Can handle both planned and unplanned failovers
  • Achieved 120,000 IOPS
  • Can be further secured using Network Security Groups (NSG) and Application Security Groups (ASG)

Visit our website to find out more about our work with Microsoft Azure, or contact AIS today to get started with Oracle DB on Azure!

Let’s say you are trying to move an Oracle database to Azure, but don’t want to go down the route of creating an Oracle database in an Azure VM for obvious reasons: you don’t want to be responsible for maintaining the VM’s availability, hotfixes, patching, etc. At the same time, let’s say you do want to take advantage of a fully-managed persistence service that offers local and geo-redundancy, plus the ability to create snapshots to protect against accidental deletes or data corruption.

It turns out that the latest advancements in Azure Container Instances (ACI), combined with the ability to deploy them in a VNET, can get you close.

Let’s start by reviewing the architecture.

Architecture diagram

We can host an Oracle DB container image inside Azure Container Instances (ACI). ACI is a container-as-a-service offering that removes the need to manage the underlying virtual machines, and it eliminates the need to set up our own orchestrator. Additionally, ACI-hosted containers (Linux only for now) can be placed in a delegated subnet, which makes the container instance available from inside a VNET without the need to open a public endpoint.

Finally, the data files (persistent) aspect of the database resides in Azure Files, which removes the need to manage our durable storage since Azure Files takes care of the local and geo-redundancy. Additionally, Azure Files can take snapshots, allowing us a point-in-time restore ability.

(Azure Files also supports virtual network service endpoints, which allow for locking down access to the resources within the VNET.)
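For example, a point-in-time snapshot of the share can be taken with a single CLI call (the storage account and share names are placeholders):

# Snapshot the share holding the Oracle data files
az storage share snapshot --account-name mystorageacct --name orcl-share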

ACI also offers fast start times, plus policy-based automatic restarting of the container upon failure.

Here are the three steps to get this setup working:

Step One

Create the ACI hosting Oracle Database Server 12.2.0.1 that mounts an Azure file share and is connected to a delegated subnet.

# Create the container group: Oracle DB image, ports 1521/5500,
# an Azure file share mounted at /ORCL, and a delegated subnet in a VNET
az container create -g <> --name <> \
  --image registry-1.docker.io/store/oracle/database-enterprise:12.2.0.1 \
  --registry-username <> --registry-password <> \
  --ports 1521 5500 \
  --memory 8 --cpu 2 \
  --azure-file-volume-account-name <> --azure-file-volume-account-key <> \
  --azure-file-volume-share-name <> --azure-file-volume-mount-path /ORCL \
  --vnet <> --vnet-address-prefix <> \
  --subnet <> --subnet-address-prefix <>

Step Two

This step is a workaround that could be eliminated if we had access to the Dockerfile used to create this image. We are essentially copying the /oradata files containing the control files, data files, etc. to the Azure file share.

# Run inside the container (e.g., via `az container exec`):
# copy the seeded data files onto the mounted Azure file share
mkdir -p /u02/app/oracle/oradata/ORCL; cp -r /u02/app/oracle/oradata/ORCLCDB/. /u02/app/oracle/oradata/ORCL/

Step Three

Connect to the Oracle database from the VNET.

Since the Oracle DB container is created in a VNET, a private IP address is assigned to the container.  We can use this IP to connect to it from inside the VNET.
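As a sketch: look up the private IP with the CLI, then connect from a VM inside the VNET. The IP below is illustrative, and ORCLCDB.localdomain is the documented default service name for this image, so verify it for your tag:

# Get the container group's private IP
az container show -g <resource-group> -n <container-name> --query ipAddress.ip -o tsv

# From a jumpbox/VM inside the VNET (example IP shown)
sqlplus sys/<password>@//10.0.0.5:1521/ORCLCDB.localdomain as sysdba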

That’s it! We now have an Oracle database without the need to maintain the underlying VM or data volume.

Let’s Talk Pricing

Azure Container Instances bill per second at the container group level. Container group resources (vCPU, memory) are shared across the multiple containers that make up the group, which are scheduled on the same host.

The current pricing per second is listed below:

Container group duration:
Memory: $0.0000015 per GB-s
vCPU: $0.0000135 per vCPU-s

The setup we defined above (8GB memory and two vCPUs) will cost ~$100/month based on the following pricing calculation:

Memory duration:

Number of container groups * memory duration (seconds) * GB * price per GB-s * number of days

1 container group * 86,400 seconds * 8 GB * $0.0000015 per GB-s * 30 days = $31.10

vCPU duration:

Number of container groups * vCPU duration (seconds) * vCPUs * price per vCPU-s * number of days

1 container group * 86,400 seconds * 2 vCPUs * $0.0000135 per vCPU-s * 30 days = $69.98

Total billing:

Memory cost + vCPU cost = total cost

$31.10 + $69.98 ≈ $101 per month

Almost! But not completely there!

As the title suggests, the approach mentioned above gets us close to our objective of “Oracle DB as a service on Azure” but we are not all the way there. I would be remiss not to mention some of the challenges that remain.

Our setup is resilient to failure (e.g., policy-based restart), but it does not offer high availability. For that, you will have to set up something like Oracle Data Guard on Azure.

ACI’s scaling model is horizontal, so vertical scaling options are limited to the current ACI limits (16 GB of memory and four vCPUs per container group).

The ACI VNET integration capability has some networking limits around outbound NSGs and public peering that you need to be aware of.

I’d like to thank Manish Agarwal and his team for help with this setup.

Kubernetes logo

If you’ve worked in software development or IT for any amount of time, chances are you’ve at least heard about containers…and maybe even Kubernetes.

Maybe you’ve heard that Google manages to spin up two billion containers a week to support their various services, or that Netflix runs its streaming, recommendation, and content systems on a container orchestration platform called Titus.

This is all very exciting stuff, but I’m more excited to write and talk about these things now more than ever before, for one simple reason: We are finally at a point where these technologies can make our lives as developers and IT professionals easier!

And even better…you no longer have to be a Google (or one of the other giants) employee to have a practical opportunity to use them.

Containers

Before getting into orchestrators and what they actually offer, let’s briefly discuss the fundamental piece of technology that all of this depends on – the container itself.

A container is a digital package of sorts, and it includes everything needed to run a piece of software.  By “everything,” I mean the application code, any required configuration settings, and the system tools that are normally brought to the table by a computer’s operating system. With those three pieces, you have a digital package that can run a software application in isolation across different computing platforms because the dependencies are included in that package.

And there is one more feature that makes containers really useful – the ability to snapshot the state of a container at any point. This snapshot is called a container “image.” Think of it in the same way you would normally think of a virtual machine image, except that many of the complexities of capturing the current state of a full-blown machine image (state of the OS, consistency of attached disks at the time of the snapshot, etc.) are not present in this snapshot. Only the components needed to run the software are present, so one or a million instances can be spun up directly from that image, and they should not interfere with each other. These “instances” are the actual running containers.

So why is that important? Well, we’ve just alluded to one reason: Containers can run software across different operating systems (various Linux distributions, Windows, Mac OS, etc.).  You can build a package once and run it in many different places. It should seem pretty obvious at this point, but in this way, containers are a great mechanism for application packaging and deployment.

To build on this point, containers are also a great way to distribute your packages as a developer.  I can build my application on my development machine, create a container image that includes the application and everything it needs to run, and push that image to a remote location (typically called a container registry) where it can be downloaded and turned into one or more running instances.
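In Docker terms, that whole loop is just a few commands (the registry and image names are made up):

# Package the app and its dependencies into an image
docker build -t registry.example.com/myapp:1.0 .

# Publish the image so other environments can pull it
docker push registry.example.com/myapp:1.0

# Anywhere else: pull and run an instance of that exact package
docker run -d registry.example.com/myapp:1.0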

I said that you can package everything your container needs to run successfully, but the last point to make is that the nature of the container package gives you a way to enforce a clear boundary for your application and a way to enforce runtime isolation. This feature is important when you’re running a mix of various applications and tools…and you want to make sure a rogue process built or run by someone else doesn’t interfere with the operation of your application.

Container Orchestrators

So containers came along and provided a bunch of great benefits for me as a developer.  However, what if I start building an application, and then I realize that I need a way to organize and run multiple instances of my container at runtime to address the expected demand?  Or better yet, if I’m building a system comprised of multiple microservices that all need their own container instances running?  Do I have to figure out a way to maintain the desired state of this system that’s really a dynamic collection of container instances?

This is where container orchestration comes in.  A container orchestrator is a tool that helps manage how your container instances are created, scaled, and managed at runtime, how they are placed on underlying infrastructure, how they communicate with each other, and so on.  The “underlying infrastructure” is a fleet of one or more servers that the orchestrator manages – the cluster.  Ultimately, the orchestrator helps manage the complexity of taking your container-based, in-development applications to a more robust platform.

Typically, interaction with an orchestrator occurs through a well-defined API, and the orchestrator takes up the tasks of creating, deploying, and networking your container instances – exactly as you’ve specified in your API calls across any container host (servers included in the cluster).

Using these fundamental components, orchestrators provide a unified compute layer on top of a fleet of machines that allows you to decouple your application from these machines. And the best orchestrators go one step further and allow you to specify how your application should be composed, thus taking the responsibility of running the application and maintaining the correct runtime configuration…even when unexpected events occur.


Kubernetes

Kubernetes is a container orchestrator that delivers the capabilities mentioned above. (The name “Kubernetes” comes from the Greek term for “pilot” or “helmsman of a ship.”) Currently, it is the most popular container orchestrator in the industry.

Kubernetes was originally developed by Google, based in part on the lessons learned from developing its internal cluster management and scheduling system, Borg.  Google open-sourced Kubernetes in 2014 and later donated the project to the Cloud Native Computing Foundation (CNCF) to encourage community involvement in its development. The CNCF is a child entity of the Linux Foundation and operates as a “vendor-neutral” governance group. Kubernetes is now consistently in the top ten open-source projects based on total contributors.

Many in the industry say that Kubernetes has “won” the mindshare battle for container orchestrators, but what gives Kubernetes such a compelling value proposition?  Well, beyond meeting the capabilities mentioned above regarding what an orchestrator “should” do, the following points also illustrate what makes Kubernetes stand out:

  • The largest ecosystem of self-driven contributors and users of any orchestrator technology facilitated by CNCF, GitHub, etc.
  • Extensive client application platform support, including Go, Python, Java, .NET, Ruby, and many others.
  • The ability to deploy clusters across on-premises or the cloud, including native, managed offerings across the major public cloud providers (AWS, GCP, Azure). In fact, you can use the SAME API with any deployment of Kubernetes!
  • Diverse workload support with extensive community examples – stateless and stateful, batch, analytics, etc.
  • Resiliency – Kubernetes is a loosely-coupled collection of components centered around deploying, maintaining and scaling workloads.
  • Self-healing – Kubernetes works as an engine for resolving state by converging the actual and the desired state of the system.

Kubernetes Architecture

A Kubernetes cluster will always include a “master” and one or more “workers”.  The master is a collection of processes that manage the cluster, and these processes are deployed on a master node or multiple master nodes for High Availability (HA).  Included in these processes are:

  • The API server (kube-apiserver) and a distributed key-value store for the persistence of cluster management data (etcd)
  • The core control loops for monitoring existing state and management of desired state (kube-controller-manager)
  • The core control loops that allow specific cloud platform integration (cloud-controller-manager)
  • A scheduler component for the deployment of Kubernetes container groups, known as pods (kube-scheduler)

Worker nodes are responsible for actually running the container instances within the cluster.  In comparison, worker nodes are simpler: they receive instructions from the master and set out serving up containers.  On the worker node itself, three main components make it a worker node in a Kubernetes cluster: an agent called the kubelet that identifies the node and communicates with the master; a network proxy for interfacing with the cluster network stack (kube-proxy); and the Container Runtime Interface (CRI), a plug-in interface that allows the kubelet to use a variety of container runtimes.

diagram of Kubernetes architecture

Image source
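On a self-managed cluster (e.g., one built with kubeadm) you can see these master processes running as pods; managed offerings typically hide them:

# List the control-plane components on a kubeadm-style cluster
kubectl get pods --namespace kube-system
# Typical entries include kube-apiserver-*, etcd-*, kube-scheduler-*,
# and kube-controller-manager-* pods, plus kube-proxy on each node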

Managed Kubernetes and Azure Kubernetes Service

“Managed Kubernetes” is a deceptively broad term that describes a scenario where a public cloud provider (Microsoft, Amazon, Google, etc.) goes a step beyond simply hosting your Kubernetes clusters in virtual machines to take responsibility for deploying and managing your cluster for you.  Or more accurately, they will manage portions of your cluster for you.  I say “deceptively” broad for this reason – the portions that are “managed” varies by vendor.

The idea is that the cloud provider is:

  1. Experienced at managing infrastructure at scale and can leverage tools and processes most individuals or companies can’t.
  2. Experienced at managing Kubernetes specifically, and can leverage dedicated engineering and support teams.
  3. Can add additional value by providing supporting services on the cloud platform.

In this model, the provider does things like abstracting the need to operate the underlying virtual machines in a cluster, providing automation for actions like scaling a cluster, upgrading to new versions of Kubernetes, etc.

So the advantage for you, as a developer, is that you can focus more of your attention on building the software that will run on top of the cluster, instead of on managing your Kubernetes cluster, patching it, providing HA, etc. Additionally, the provider will often offer complementary services you can leverage like a private container registry service, tools for monitoring your containers in the cluster, etc.

Microsoft Azure offers the Azure Kubernetes Service (AKS), which is Azure’s managed Kubernetes offering. AKS allows full production-grade Kubernetes clusters to be provisioned through the Azure portal or automation scripts (ARM, PowerShell, CLI, or combination).  Key components of the cluster provisioned through the service include:

  • A fully-managed, highly-available Master. There’s no need to run a separate virtual machine(s) for the master component.  The service provides this for you.
  • Automated provisioning of worker nodes – deployed as Virtual Machines in a dedicated Azure resource group.
  • Automated cluster node upgrades (Kubernetes version).
  • Cluster scaling through auto-scale or automation scripts.
  • CNCF certification as a compliant managed Kubernetes service. This means it leverages the Cloud-controller-manager standard discussed above, and its implementation is endorsed by the CNCF.
  • Integration with supporting Azure services including Azure Virtual Networks, Azure Storage, Azure Role-Based Access Control (RBAC), and Azure Container Registry.
  • Integrated logging for apps, nodes, and controllers.
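As a quick sketch with the Azure CLI (resource names are illustrative), provisioning a cluster and connecting to it takes only a few commands:

# Create a three-node AKS cluster with monitoring enabled
az aks create --resource-group my-rg --name my-aks \
  --node-count 3 --enable-addons monitoring --generate-ssh-keys

# Fetch credentials for kubectl and verify the nodes
az aks get-credentials --resource-group my-rg --name my-aks
kubectl get nodes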

Conclusion

The world of containers continues to evolve, and orchestration is an important consideration when deploying your container-based applications to environments beyond “development.”  While not simple, Kubernetes is a very popular choice for container orchestration and has extremely strong community support.  The evolution of managed Kubernetes makes using this platform more realistic than ever for developers (or businesses) interested in focusing on “shipping” software.

I recently encountered an issue when trying to create an Exact Age column for a contact in Microsoft Dynamics CRM. There were several solutions available on the internet, but none of them was a good match for my specific situation. Some ideas I explored included:

  1. Creating a calculated field using the formula DiffInDays(DOB, Now()) / 365 or DiffInYears(DOB, Now()) – I used this at first, but if the calculated field is a decimal type, then you end up with a value like 23 years old which is not desirable. If the calculated field is a whole number type, then the value is always the rounded value. So, if the DOB is 2/1/1972 and the current date is 1/1/2019, the Age will be 47 when the contact is actually still 46 until 2/1/2019.
  2. Using JavaScript to calculate the Age – The problem with this approach is that if the record is not saved, the data becomes stale. It also does not work in a view (i.e., if you want to see a list of client ages). The JavaScript solution is geared toward the form UI experience only.
  3. Using Workflows with Timeouts – This approach seemed a bit complicated and cumbersome to update values daily across so many records.

Determining Exact Age

Instead, I decided to plug some of the age scenarios into Microsoft Excel and simulate Dynamics CRM’s calculations to see if I could come up with any ideas.

Note: 365.25 is used to account for leap years. I originally used 365, but the data was incorrect. After reading about leap years, I decided to plug 365.25 in, and everything lined up.

Excel Formulas

Setting up the formulas above, I was able to calculate the values below. I found that subtracting the DATEDIF Rounded value from the DATEDIF Actual value produced a negative value when the month/day was after the current date (2/16/2019 at the time). This allowed me to introduce a factor of -1 when the Difference was less than or equal to 0.  Using this finding, I set up the solution in CRM.

Excel Calculations

The Solution

  1. Create the necessary fields.
    Field | Data Type | Field Type | Other | Formula
    DOB | Date and Time | Simple | Behavior: User Local |
    Age Actual | Decimal Number | Calculated | Precision: 10 | DiffInDays(new_dob, Now()) / 365.25
    Age Rounded | Whole Number | Calculated | | DiffInDays(new_dob, Now()) / 365.25
    Age Difference | Decimal Number | Calculated | Precision: 10 | new_ageactual – new_agerounded
    Age | Whole Number | Calculated | | See below
  2. Create a business rule for DOB, setting it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  3. Set up the Age calculated field as follows – if Age Difference is less than or equal to 0, set Age to Age Rounded - 1; otherwise set Age to Age Rounded:
    Age Calculated Field

Once these three steps have been completed, your new Age field should be ready to use. I created a view to verify the calculations. I happened to be writing this post very late on the night of 2/16/2019. I wrote the first part before 12:00 a.m., then I refreshed the view before taking the screenshot below. I was happy to see Age Test 3 record flip from 46 to 47 when I refreshed after 12:00 a.m.

Age Solution Results

Determining Exact Age at Some Date in the Future

The requirement that drove my research for this solution was the need to determine the exact age in the future. Our client needed to know the age of a traveler on the date of travel. Depending on the country being visited and the age of the traveler on the date of departure, different forms would need to be sent in order to prevent problems when the traveler arrived at his or her destination. The solution was very similar to the Age example above:

The Solution

  1. Here is an overview of the entity hierarchy:
    Age at Travel Entities
  2. Create the necessary fields.
    Entity | Field | Data Type | Field Type | Other | Formula
    Trip | Start Date | Date and Time | Simple | Behavior: User Local |
    Contact | DOB | Date and Time | Simple | Behavior: User Local |
    Trip Contact | Age at Travel Actual | Decimal Number | Calculated | Precision: 10 | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Rounded | Whole Number | Calculated | n/a | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Difference | Decimal Number | Calculated | Precision: 10 | new_ageattravelactual – new_ageattravelrounded
    Trip Contact | Age at Travel | Whole Number | Calculated | n/a | See below
  3. Create a business rule for Contact DOB, setting it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  4. Set up the Trip Contact’s Age at Travel calculated field as follows – if Age at Travel Difference is less than or equal to 0, set Age at Travel to Age at Travel Rounded - 1; otherwise set it to Age at Travel Rounded:
    Age at Travel Calculated Field

Once these steps have been completed, your new Age at Travel field should be ready to use. I created a view to verify the calculations.

You’ll notice that in the red example, the trip starts on 8/14/2020. The contact was born on 9/29/2003 and is 16 on the date of travel but turns 17 a month or so later. In the green example, the trip is also on 8/14/2020. The contact was born 4/12/2008 and will turn 12 before the date of travel.

Age at Travel Solution Results

Conclusion

While there are several approaches to the Age issue in Dynamics CRM, this is a great alternative that requires no code and works in real time. I hope you find it useful!

Kubernetes logo

Today, let’s talk about network isolation and traffic policy within the context of Kubernetes.

Network Policy Specification

Kubernetes’ first-class notion of network policy allows a customer to determine which pods are allowed to talk to which other pods. While these policies are part of the Kubernetes specification, tools like Calico and Cilium implement them.

Here is a simple example of a network policy:

...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted
  ...

In the above example, only pods carrying the label zone: trusted are allowed to send incoming traffic to the selected pod.
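For completeness, here is a minimal sketch of the full manifest around that fragment; the policy name and the label selecting the protected pods are illustrative assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted-zone
spec:
  podSelector:
    matchLabels:
      app: reviews        # the pods being protected (hypothetical label)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted    # only these pods may connect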

egress:
  - action: deny
    destination:
      notSelector: ns == 'gateway'

The above example deals with outgoing traffic (expressed here in Calico-style syntax rather than the core Kubernetes NetworkPolicy API). This policy ensures that traffic going out is blocked unless the destination carries the label ns: 'gateway'.

As you can see, network policies are important for isolating pods from each other in order to avoid leaking information between applications. However, if you are dealing with data that requires higher trust levels, you may want to consider isolating the applications at the cluster level. The following diagrams depict both logical (network policy based) and physical (isolated) clusters.

Diagram of a Prod Cluster

Diagrams of Prod Team Clusters

Network Policy is NOT Traffic Routing…Enter Istio!

Network policies, however, do not allow us to control the flow of traffic on a granular level. For example, let’s assume that we have three versions of a “reviews” service (a service that returns user reviews for a given product). If we want the ability to route the traffic to any of these three versions dynamically, we will need to rely on something else. In this case, let’s use the traffic routing provided by Istio.

Istio is a tool that manages the traffic flow across services using two primary components:

  1. An Envoy proxy (more on Envoy later in the post) distributes traffic based on a set of rules.
  2. The Pilot manages and configures the traffic rules that let you specify how traffic should be routed.

Diagram of Istio Traffic Management

image source

Here is an example of Istio policy that directs all traffic to the V1 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
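One detail worth noting: subset routing relies on a DestinationRule that maps each subset name to pod labels. For the bookinfo-style “reviews” service, it looks roughly like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v3
    labels:
      version: v3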

Here is a Kiali Console view of all “live” traffic being sent to the V1 version of the “reviews” service:

Kiali console screenshot

Now here’s an example of Istio policy that directs all traffic to the V3 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3

And here is a Kiali Console view of all “live” traffic being sent to the V3 version of the “reviews” service:

Kiali console screenshot v3

Envoy Proxy

Envoy is a lightweight proxy with powerful routing constructs. In the example above, the Envoy proxy is placed as a “sidecar” alongside our services (product page and reviews) and handles their outbound traffic. Envoy can dynamically route all outbound calls from a product page to the appropriate version of the “reviews” service.

We already know that Istio makes it simple to configure traffic routing policies in one place (via Pilot). Istio also makes it simple to inject the Envoy proxy as a sidecar. The following kubectl command labels the namespace for automatic sidecar injection:

#--> Enable Side Car Injection
kubectl label namespace bookinfo istio-injection=enabled

As you can see, each pod has two containers (the service and the Envoy proxy):

# Get all pods 
kubectl get pods --namespace=bookinfo

I hope this blog post helps you think about traffic routing between Kubernetes pods using Istio and Envoy. In future blog posts, we’ll explore the other facets of a “service mesh” – a common substrate for managing a large number of services, with traffic routing being just one facet of a service mesh.