As I was nearing retirement from the Air Force, I received briefings reminding me that the world outside the military is nothing like the one inside it: it's a dog-eat-dog world out there; companies only care about their bottom line; companies won't help anyone unless it benefits them; and so on.

When I came onboard with AIS, however, I was told that our Managed Services practice provides pro-bono support for the Grassroots Crisis Intervention Center. I was taken aback to learn that AIS is the total opposite of what I’d been taught. This company really cares for the community — and doesn’t just say it, but proves it.

About Grassroots

Grassroots Crisis Intervention Center is the only homeless shelter in Columbia, Maryland, that provides beds and round-the-clock support for those in need of professional crisis intervention, suicide prevention, or outreach services for personal, situational, mental health, and domestic violence crises. In 2018, they fielded 38,914 calls on their 24-Hour Crisis Intervention Hotline, served 2,028 clients in their various shelters, and ensured the safety of 262 children. Grassroots is a private non-profit agency that receives funding from grants, private foundations, community donations, and local businesses, but that funding simply isn't enough to ensure their IT is ready to handle the day-to-day demands Grassroots volunteers face.

Grassroots was lucky to have a volunteer donate their time to provide IT support. Unfortunately, there's only so much one person can do for an organization like Grassroots, which continues to grow and faces new IT challenges each day.

Providing 24/7 Support for a 24/7 Organization

AIS offered to help Grassroots' volunteer IT person manage the organization's needs and growth at no cost. Our Managed Services team worked with them to identify their needs and bottlenecks so we could best support them. The result was an arrangement where the Grassroots volunteer IT staff handles onsite demands, while AIS Managed Services handles any IT needs that can be addressed remotely, 24/7/365.

In the past year, we’ve helped resolve over 100 different issues ranging from account creation/deletion and email connectivity problems to agency-wide network outages. Managed Services resolved each one of these tickets with a 100% satisfaction rate. Along with resolving tickets, the Managed Services team also provides basic IT training to Grassroots counselors to help increase their understanding of the system… so that they can quickly get back to the important work of helping people.

Service Before Self

In the Air Force, one of our core values was Service Before Self. It's refreshing to see that AIS is a company that puts community service ahead of itself, and it makes this veteran proud to work for a company that supports the Grassroots Crisis Intervention Center.

Today, let's talk about network isolation and traffic policy within the context of Kubernetes.

Network Policy Specification

Kubernetes has a first-class notion of network policy that lets you determine which pods are allowed to talk to which other pods. While the NetworkPolicy resource is part of the Kubernetes API, it is network plugins such as Calico and Cilium that actually enforce these policies.

Here is a simple example of a network policy:

...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted
  ...

In the above example, only pods with the label zone: trusted are allowed to send incoming traffic to the pods selected by this policy.
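For context, here is a minimal sketch of what the complete policy might look like; the policy name and the app: api pod selector are assumptions added for illustration and are not part of the original snippet:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-trusted-zone
spec:
  # Protect pods labeled app: api (assumed for this example)
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  # Only allow incoming traffic from pods labeled zone: trusted
  - from:
    - podSelector:
        matchLabels:
          zone: trusted

The next snippet turns to outgoing (egress) traffic: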

egress:
  - action: deny
    destination:
      notSelector: ns == 'gateway'

The above example is written in an extended policy syntax (such as Calico's) rather than the core Kubernetes NetworkPolicy API. It denies any outgoing traffic whose destination does not carry the label ns: gateway, so egress is blocked unless it is headed to a 'gateway' endpoint.
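A similar effect can be expressed with the core Kubernetes NetworkPolicy API as an allow-list: if a policy selects the pods and only permits egress to pods labeled ns: gateway, all other outgoing traffic from those pods is blocked. Here is a minimal sketch; the policy name and the empty pod selector (which matches every pod in the namespace) are assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-to-gateway
spec:
  # Apply to all pods in the namespace (assumed for this example)
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Only egress to pods labeled ns: gateway is allowed; everything else is denied
  - to:
    - podSelector:
        matchLabels:
          ns: gateway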

As you can see, network policies are important for isolating pods from each other in order to avoid leaking information between applications. However, if you are dealing with data that requires higher trust levels, you may want to consider isolating the applications at the cluster level. The following diagrams depict both logical (network policy based) and physical (isolated) clusters.

Diagram of a single Prod Cluster (logical isolation via network policy)
Diagrams of separate Prod Team Clusters (physical isolation)

Network Policy is NOT Traffic Routing…Enter Istio!

Network policies, however, do not allow us to control the flow of traffic on a granular level. For example, let’s assume that we have three versions of a “reviews” service (a service that returns user reviews for a given product). If we want the ability to route the traffic to any of these three versions dynamically, we will need to rely on something else. In this case, let’s use the traffic routing provided by Istio.

Istio is a tool that manages the traffic flow across services using two primary components:

  1. An Envoy proxy (more on Envoy later in the post) distributes traffic based on a set of rules.
  2. The Pilot manages and configures the traffic rules that let you specify how traffic should be routed.

Diagram of Istio Traffic Management


Here is an example of an Istio policy that directs all traffic to the V1 version of the "reviews" service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Here is a Kiali Console view of all “live” traffic being sent to the V1 version of the “reviews” service:

Kiali console screenshot

Now here's an example of an Istio policy that directs all traffic to the V3 version of the "reviews" service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3

And here is a Kiali Console view of all “live” traffic being sent to the V3 version of the “reviews” service:

Kiali console screenshot v3
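Istio can also split traffic by percentage rather than sending everything to a single subset. As a hedged sketch (not part of the original demo), the following VirtualService would send 90% of the traffic to v1 and 10% to v3:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v3
      weight: 10

Weighted routing like this is the building block for canary releases and gradual rollouts.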

Envoy Proxy

Envoy is a lightweight proxy with powerful routing constructs. In the example above, the Envoy proxy is deployed as a "sidecar" alongside our services (the product page and reviews services) and intercepts their traffic. This lets Envoy dynamically route all outbound calls from the product page to the appropriate version of the "reviews" service.

We already know that Istio makes it simple for us to configure the traffic routing policies in one place (via the Pilot). But Istio also makes it simple to inject the Envoy proxy as a sidecar. The following kubectl command labels the namespace for automatic sidecar injection:

#--> Enable Side Car Injection
kubectl label namespace bookinfo istio-injection=enabled

As you can see, each pod has two containers (the service and the Envoy proxy):

# Get all pods 
kubectl get pods --namespace=bookinfo

I hope this blog post helps you think about traffic routing between Kubernetes pods using Istio and Envoy. In future blog posts, we'll explore other facets of a "service mesh": a common substrate for managing a large number of services, of which traffic routing is just one part.

Define cloud apps and infrastructure in your favorite language and deploy to any cloud with Pulumi.

If you search the Internet for Infrastructure-as-Code (IaC), it's pretty easy to come up with a list of the most popular tools: Chef, Ansible, Puppet, Terraform… and the newcomer to the space: Pulumi.

It's 4 a.m. and the production server has gone down. Can you keep calm and simply rebuild it?

Sure, how tough can it be? Except that you'll probably need to recall what you did a year ago to set up your environment, then desperately try to figure out everything you've installed, implemented, or configured since. Only after gathering all of those findings can you closely replicate the environment.

Wouldn’t it be nice to have something that manages all this configuration for you? No, there aren’t robots coming to take over the DevOps team yet. I’m talking about using Infrastructure-as-Code to automatically and consistently manage infrastructure configuration.

What is Infrastructure as Code (IaC)?

As the name suggests, Infrastructure-as-Code is the concept of managing your operations environment in the same way you manage applications or other code.

Put simply, Infrastructure as Code means describing your infrastructure as code that is kept under a version control system (e.g., Git) and stored in a repository, where you can manage it just like your application code.

Pulumi: the new IaC tool

While learning Azure, I tried implementing IaC with Azure Resource Manager Templates (aka ARM Templates). For this, I learned PowerShell and wrote several templates using it. As a developer, PowerShell isn't a language I use on a daily basis, but I do use JavaScript heavily across many of my projects.

Then the internet community whispered about Pulumi.

I’ve tried my hand at Pulumi and the experience has been very enlightening, so I’m sharing some of the more important and interesting findings with you all.

Pulumi is a multi-language and multi-cloud development platform.

Pulumi supports all major clouds — including Amazon Web Services (AWS), Azure and Google Cloud, as well as Kubernetes clusters. It lets you create all aspects of cloud programs using real languages (Pulumi currently supports JavaScript, TypeScript, and Python, with more languages supported in the future) and real code, from infrastructure on up to the application itself. Just write programs and run them, and Pulumi figures out the rest.

Using real languages unlocks tremendous benefits:

  • Familiarity: no need to learn new bespoke DSLs or YAML-based templating languages.
  • Abstraction: build bigger things out of smaller things.
  • Sharing and reuse: we leverage existing language package managers to share and reuse these abstractions, either with the community, within your team, or both.
  • Full control: use the full power of your language, including async, loops, and conditionals (see the sketch just after this list).
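To illustrate that last point, here is a minimal, hypothetical sketch (the bucket names, the count of three, and the use of the @pulumi/aws package are all chosen for illustration) that uses an ordinary JavaScript loop to create several S3 buckets:

"use strict";
const aws = require("@pulumi/aws");

// A plain JavaScript loop drives resource creation.
const buckets = [];
for (let i = 0; i < 3; i++) {
    buckets.push(new aws.s3.Bucket(`demo-bucket-${i}`));
}

// Export the generated bucket names as stack outputs.
exports.bucketNames = buckets.map(b => b.bucket);

Loops, conditionals, and async code like this are awkward or impossible to express in static templates.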

My favorite things about Pulumi

  1. Multi-Language and real language: Using general-purpose programming languages reduces the learning curve and makes it easier to express your configuration requirements.
  2. Developer friendly and easily configurable: Pulumi bridges the gap between development and operations teams by not treating application code and infrastructure as separate things. Developers can simply list their dependencies in the package.json file, as the snippet below shows:
{
   "name": "azure-javascript",  // Name of the Pulumi project
   "main": "index.js",          // Entry point of the Pulumi program
   "dependencies": {            // Dependencies (with versions) to be installed with NPM
       "@pulumi/pulumi": "latest",
       "@pulumi/azure": "latest",
       "azure-storage": "latest",
       "mime": "^2.4.0"
   }
}

A YAML configuration file is created when we initialize the Pulumi stack; it holds the parameters the program needs, such as credentials and location.
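As an illustration (the stack name dev and the key names are hypothetical), the generated Pulumi.dev.yaml might look like this:

config:
  azure-javascript:location: EastUS
  azure-javascript:storageAccountName: mystorageacct

Secrets can also be stored here in encrypted form via pulumi config set --secret.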

  3. Reusable Components: Thanks to having a real language, we can build higher-level abstractions.

Below is one of my example code snippets: a Pulumi component that creates an Azure Resource Group that can be reused in other programs. You can find the full source code, which provisions an Azure Load Balancer, on GitHub.

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

class ResourceGroup extends pulumi.ComponentResource {
    constructor(resourceGroupName, location, opts) {
        // Register this component under a custom type name.
        super("az-pulumi-createstorageaccount:ResourceGroup", resourceGroupName, {}, opts);

        console.log(`Resource Group ${resourceGroupName} : location ${location}`);

        // Create an Azure Resource Group as a child of this component.
        const resourceGroup = new azure.core.ResourceGroup(resourceGroupName, {
            location: location,
        }, {
            parent: this,
        });

        // Expose the name and location of the resource group that was created.
        this.resourceGroupName = resourceGroup.name;
        this.location = location;

        // For dependency tracking, register output properties for this component.
        this.registerOutputs({
            resourceGroupName: this.resourceGroupName,
        });
    }
}

module.exports.ResourceGroup = ResourceGroup;


This class can be instantiated as below:

// Import the component class defined above.
const resourceGroup = require("./create-resource-group.js");

// Create an Azure Resource Group.
// Arguments: resource group name and location.
const azureResourceGroup = new resourceGroup.ResourceGroup("rgtest", "EastUS");

  4. Multi-Cloud: Pulumi supports all major clouds, including AWS, Azure, and Google Cloud, as well as Kubernetes clusters. This delivers a consolidated programming model and tools for managing cloud software anywhere. There's no need to learn three different YAML dialects and five different CLIs just to get a simple container-based application stood up in production.

The code below uses a single Pulumi program to provision resources in both AWS and GCP (Google Cloud Platform). The example is in TypeScript and requires the @pulumi/aws and @pulumi/gcp packages from NPM.

import * as aws from "@pulumi/aws";
import * as gcp from "@pulumi/gcp";

// Create an AWS resource (S3 Bucket)
const awsBucket = new aws.s3.Bucket("my-bucket");

// Create a GCP resource (Storage Bucket)
const gcpBucket = new gcp.storage.Bucket("my-bucket");

// Export the names of the buckets
export const bucketNames = [
    awsBucket.bucket,
    gcpBucket.name,
];

Pulumi ensures that resources are created in both clouds. Let's take a look at how Pulumi previews the plan for both clouds and then deploys the resources to each of them.

Previewing update (multicloud-ts-buckets-dev):

Type Name Plan
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev create
+ ├─ gcp:storage:Bucket my-bucket create
+ └─ aws:s3:Bucket my-bucket create

Resources:
3 changes
+ 3 to create

Do you want to perform this update? yes
Updating (multicloud-ts-buckets-dev):

Type Name Status
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev created
+ ├─ gcp:storage:Bucket my-bucket created
+ └─ aws:s3:Bucket my-bucket created

Outputs:
bucketNames: [
[0]: "my-bucket-c819937"
[1]: "my-bucket-f722eb9"
]

Resources:
3 changes
+ 3 created

Duration: 21.713128552s

The outputs show the names of the AWS and GCP buckets, respectively.

Another scenario would be to use Pulumi to create a storage account in Azure and an S3 bucket in AWS.

// Creating a storage account in Azure
const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

// rgName, rgLocation, and storageAccountName are assumed to be defined elsewhere (e.g., read from stack config).
const storageAccount = new azure.storage.Account(storageAccountName, {
    resourceGroupName: rgName,
    location: rgLocation,
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// Creating an S3 bucket in AWS
const aws = require("@pulumi/aws");

const siteBucket = new aws.s3.Bucket("my-bucket", {
    website: {
        indexDocument: "index.html",
    },
});

Pulumi enables you to mix and match these cloud resources within the same program or file, or across different ones.

  5. Stacks: A core concept in Pulumi is the idea of a "stack." A stack is an isolated instance of your cloud program whose resources and configuration are distinct from all other stacks. You might have a stack each for production, staging, and testing, or perhaps for each single-tenanted environment. Pulumi's CLI makes it trivial to spin up and tear down lots of stacks.
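As a quick illustration (the stack name staging and the config key are just examples), a typical stack workflow with the Pulumi CLI looks like this:

# Create and select a new stack named "staging"
pulumi stack init staging

# Set stack-specific configuration
pulumi config set location EastUS

# Preview and deploy the stack's resources
pulumi up

# Tear the resources down and remove the stack when finished
pulumi destroy
pulumi stack rm staging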

Closing Thoughts

I would like to close this post with a thought: the internet community has been calling this a cloud renaissance for DevOps and developers. Building powerful cloud software will become more enjoyable, more productive, and more collaborative for developers. Of course, everything comes at a cost: after exploring, I found that Pulumi's documentation is still thin in places. Besides this, for developers to write IaC, a solid understanding of infrastructure is a must.

I hope that this post has given you a better idea of the overall platform, approach, and unique strengths.

Happy Puluming 🙂