Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months or years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end-of-support products, either.

So, how do we balance these two competing objectives?

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as-is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift-n-Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and distributed denial-of-service (DDoS) protection simply by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since you can technically take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a Capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily, and certainly not in the long run. Consider options like Reserved Instance (RI) pricing, which can offer discounts of up to 80% based on a one- or three-year commitment.

Furthermore, you can combine RI with the Azure Hybrid Benefit (AHUB), which provides discounts for licenses you already own. Finally, don’t forget to take into account the savings from the decreased need for power, networking, and real estate, as well as the cost of the resources needed to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Gartner research (see the chart below) suggests that, over time, the cloud can become the more cost-effective option.

Gartner analysis: public cloud vs. running your own data center costs over time

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR plan set up that is conducive to ongoing DR tests, yet having an effective DR plan is one of the most critical responsibilities of IT. Once again, since geo-replication is innate to Azure, your disks are replicated, by default, to an Azure region that is at least 400 miles away. Given this, DR is almost out-of-the-box.
  • Extended lease of life for out-of-support software – If you are running out-of-support software such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years beyond the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure SQL Managed Instance offers you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is running as an Azure SQL Managed Instance, you don’t have to worry about patching and backups, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API-driven, organizations can take baby steps towards automated provisioning (see the sketch following this list).
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity”. We also know that Artificial Intelligence (AI) needs lots and lots of data to train the Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of your applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.
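To make that API-driven point concrete, here is a minimal sketch using the Azure SDK for Python (the resource group name and region are purely illustrative) that provisions a resource group programmatically. The same pattern extends to VMs, databases, and networking.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Assumes you are already authenticated (for example, via `az login`).
credential = DefaultAzureCredential()
subscription_id = "<your-subscription-id>"  # placeholder

resource_client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group: a first "baby step" toward automated provisioning.
resource_client.resource_groups.create_or_update(
    "rg-lift-and-shift",          # illustrative name
    {"location": "eastus"},       # illustrative region
)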

Strategic Cloud Transformation


Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture applications and application tiers “as-is” in containers and run them on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details on this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference, #MSInspire, Julia White announced the Azure Migration Program (AMP).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from a strategic cloud transformation that follows. Microsoft programs like AMP, combined with 200+ Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO TRANSFORMATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.

In this episode of the Azure Government video series, Steve Michelotti sits down with AIS’ very own Vishwas Lele to discuss migrating and modernizing with Kubernetes on Azure Government. You’ll learn about the traditional approaches for migrating workloads to the cloud, including:

1. Rehost
2. Refactor
3. Reimagine

You will also learn how Kubernetes provides an opportunity to fundamentally rethink these traditional approaches to cloud migration by leveraging Kubernetes in order to get the “best of all worlds” in the migration journey. If you’re looking to migrate your existing legacy workloads to the cloud, while minimizing code changes and taking advantage of innovative cloud-native technologies, this is the video you should watch!

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

I just returned from Microsoft BUILD 2019 where I presented a session on Azure Kubernetes Services (AKS) and Cosmos. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the audience interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database service (like Cosmos DB).

Slide deck: Architecting Cloud-Native Apps with AKS and Cosmos DB

We started with a discussion of Cloud-Native Apps, along with a quick introduction to AKS and Cosmos DB. We quickly transitioned into stateful app considerations and talked about new stateful capabilities in Kubernetes, including Persistent Volumes (PV), Persistent Volume Claims (PVC), StatefulSets, the Container Storage Interface (CSI), and Operators. While these capabilities represent significant progress, they don’t yet match what an external managed service like Cosmos DB offers.


Slide: Cloud-Native Tooling

One option is to use the Open Service Broker – it allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (Service Catalog).

Slide: svcat

Slide: SRE

External services like Cosmos DB can go beyond cluster-level SRE and offer, in essence, “turn-key” SRE – specifically, geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).

Slide: Multi-Master Support

Slide: Configure Regions

Slide: Portability

Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. The Open Service Broker does not, however, deal with syntactic differences, such as connection string prefix differences between cloud providers. One way to handle these differences is to use Helm.

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

I recently built a machine learning model, trained it, and explored the implications of deploying it using Kubernetes. Machine learning trains a program to recognize patterns in data so that, when new data is provided, it can make predictions based on what it’s learned. Kubernetes is a container orchestrator that automates the deployment and scaling of containers. Packaging a machine learning program and deploying it on Kubernetes has the potential to help our customers with their increasingly complex machine learning needs.

First, I determined what I wanted to accomplish with machine learning, the tools I needed, and how to put them together.  Teaching a machine how to recognize imagery is fascinating to me, so I focused on image recognition. I found the PlanesNet dataset on Kaggle — a dataset focused on recognizing planes in aerial imagery.  The dataset would be simple enough to implement but have good potential for further exploration.

To build a proof of concept I used TensorFlow, TFLearn, and Docker.  TensorFlow is an open-source machine learning library and TFLearn is a higher-level TensorFlow API which aids in writing less code to get started.  TFLearn is a Python library, so I wrote my proof of concept in Python.  Docker was used to package the program into a container to run on Kubernetes.

Machine Learning

Images from the PlanesNet dataset

The PlanesNet dataset has 32,000 color images, each at 20px by 20px.  There are 8,000 images classified as “plane” and 24,000 images classified as “no-plane.” Reading in the dataset and preparing it for processing was straightforward using Python libraries like Pillow and NumPy.  Then I broke the dataset up into a training set and a testing set using the train_test_split function in the sklearn Python library.  The training set was used to build the model weights while the testing set validated how well the model was trained.
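As an illustrative sketch (it assumes the PlanesNet images sit in a local planesnet folder and that each file name starts with its label: 1 for “plane”, 0 for “no-plane”), the preparation step can be expressed like this:

import os

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from tflearn.data_utils import to_categorical

images, labels = [], []
for name in os.listdir("planesnet"):
    img = Image.open(os.path.join("planesnet", name)).convert("RGB")
    images.append(np.asarray(img, dtype=np.float32) / 255.0)  # 20x20x3, scaled to [0, 1]
    labels.append(int(name[0]))                               # leading character encodes the class

X = np.stack(images)                     # shape: (32000, 20, 20, 3)
y = to_categorical(np.array(labels), 2)  # one-hot: [no-plane, plane]

# Hold out 20% of the images to validate how well the model was trained.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)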

Neural Network

One of the most complex parts was figuring out how to design my simple neural network. A neural network is broken into layers, starting with the input layer, followed by one or more hidden layers, and ending with the output layer.

Creating the input layer, I defined its shape, preprocessing, and augmentation. The shape of the PlanesNet data is a 20px by 20px by 3 color channels matrix. The preprocessing step let me tweak how the data was prepared for both training and testing, while the augmentation let me perform operations (like flipping or rotating images) on the data during training.

Because I was working with imagery, I chose to use two convolutional hidden layers before arriving at the output layer. Convolutional layers perform a set of calculations on each pixel and its surrounding pixels, attempting to understand features (e.g., lines, curves) of an image. After my hidden layers, I introduced a dropout rate, which helps to reduce overfitting.

Next, I added a readout layer which brought my neural network down to the number of expected outputs.  Finally, I added a regression layer to specify a loss function, optimizer, and learning rate.  With this network, I could now train it over multiple iterations, called epochs, until I reached the desired level of prediction accuracy.
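Putting those pieces together, here is a minimal TFLearn sketch of a network along these lines. The filter counts, dropout rate, optimizer, and epoch count are illustrative choices, and the sketch picks up X_train, y_train, X_test, and y_test from the earlier snippet.

import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation

# Preprocessing (normalize the images) and augmentation (flip/rotate while training).
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.0)

# Input layer: 20px by 20px by 3 color channels.
network = input_data(shape=[None, 20, 20, 3],
                     data_preprocessing=img_prep,
                     data_augmentation=img_aug)

# Two convolutional hidden layers, followed by dropout to reduce overfitting.
network = conv_2d(network, 32, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = dropout(network, 0.5)

# Readout layer (two classes: plane / no-plane) and a regression layer that
# specifies the loss function, optimizer, and learning rate.
network = fully_connected(network, 2, activation='softmax')
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy', learning_rate=0.001)

# Train over multiple epochs, validating against the held-out testing set.
model = tflearn.DNN(network)
model.fit(X_train, y_train, n_epoch=30,
          validation_set=(X_test, y_test), show_metric=True)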


Simple Neural Network

Using this model, I could now predict if an image was of an airplane. I loaded the image, called the predict function and viewed the output.  The output gave me a percentage of likelihood that the image was an airplane.
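As a sketch (continuing from the earlier snippets; the file path is a placeholder, and model is the trained network from above), the prediction step looks like this:

# Load a 20x20 aerial image chip and scale it the same way as the training data.
img = np.asarray(Image.open("chip.png").convert("RGB"), dtype=np.float32) / 255.0

# model.predict returns class probabilities in the order [no-plane, plane].
plane_likelihood = model.predict([img])[0][1]
print("Likelihood of a plane: {:.1%}".format(plane_likelihood))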

Exploring Scaling

Now that I had my simple neural network for training and predicting, I packaged it into a Docker container so it could be run on more than my single computer.  As I examined the details of deploying to Kubernetes, two things quickly became apparent.  First, my simple neural network would not train across multiple container instances.  Second, my prediction program would scale well if I wrapped it in a web API.

For my simple neural network, I had not implemented anything to split a dataset, train multiple containers, and bring the results back together.  Therefore, running multiple instances of my container using Kubernetes would only provide the benefit of being able to choose the container with the highest-accuracy model. (The dataset splitting process, along with the input augmentation, causes each container to end up with a different accuracy.)  To have multiple containers that coordinate learning in my simple neural network would require further design.

diagram of the container model

My container’s prediction program executes via the command line, but wrapping it in a web API endpoint would make it easier to use.  Since each instance of the container has the trained model, Kubernetes could scale the number of running instances of my container up or down to meet the demand on the web API endpoint.  Kubernetes also provides a method for rolling out updates to my container if I further train my network model.
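A minimal sketch of such a wrapper using Flask (the route, port, and saved-model file name are illustrative):

from io import BytesIO

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
# `model` is the trained TFLearn model from the earlier sketches, rebuilt and loaded
# once at startup, for example: model = tflearn.DNN(network); model.load("planesnet.tfl")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect raw image bytes in the request body; resize to the 20x20 input the model expects.
    img = Image.open(BytesIO(request.data)).convert("RGB").resize((20, 20))
    sample = np.asarray(img, dtype=np.float32) / 255.0
    plane_likelihood = float(model.predict([sample])[0][1])
    return jsonify({"plane_likelihood": plane_likelihood})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)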

Conclusion

This was an excellent exercise in building a machine learning model, training it to recognize airplanes in aerial imagery, and deploying it on Kubernetes. Additional applications could include expanding the dataset to include different angles of planes, along with recognizing various specific types of planes. It would also be beneficial to redesign the neural network so that training could benefit from Kubernetes scaling. Using Kubernetes to deploy a prediction API based on a trained model is both beneficial and practical today.

If you’ve worked in software development or IT for any amount of time, chances are you’ve at least heard about containers…and maybe even Kubernetes.

Maybe you’ve heard that Google manages to spin up two billion containers a week to support their various services, or that Netflix runs its streaming, recommendation, and content systems on a container orchestration platform called Titus.

This is all very exciting stuff, but I’m more excited to write and talk about these things now more than ever before, for one simple reason: We are finally at a point where these technologies can make our lives as developers and IT professionals easier!

And even better…you no longer have to be a Google (or one of the other giants) employee to have a practical opportunity to use them.

Containers

Before getting into orchestrators and what they actually offer, let’s briefly discuss the fundamental piece of technology that all of this depends on – the container itself.

A container is a digital package of sorts, and it includes everything needed to run a piece of software.  By “everything,” I mean the application code, any required configuration settings, and the system tools that are normally brought to the table by a computer’s operating system. With those three pieces, you have a digital package that can run a software application in isolation across different computing platforms because the dependencies are included in that package.

And there is one more feature that makes containers really useful – the ability to snapshot the state of a container at any point. This snapshot is called a container “image.” Think of it in the same way you would normally think of a virtual machine image, except that many of the complexities of capturing the current state of a full-blown machine image (state of the OS, consistency of attached disks at the time of the snapshot, etc.) are not present in this snapshot.  Only the components needed to run the software are present, so one or a million instances can be spun-up directly from that image, and they should not interfere with each other.  These “instances” are the actual running containers.

So why is that important? Well, we’ve just alluded to one reason: Containers can run software across different operating systems (various Linux distributions, Windows, Mac OS, etc.).  You can build a package once and run it in many different places. It should seem pretty obvious at this point, but in this way, containers are a great mechanism for application packaging and deployment.

To build on this point, containers are also a great way to distribute your packages as a developer.  I can build my application on my development machine, create a container image that includes the application and everything it needs to run, and push that image to a remote location (typically called a container registry) where it can be downloaded and turned into one or more running instances.
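As an illustration, here is a minimal sketch using the Docker SDK for Python (the registry and image names are placeholders); the same flow is just docker build and docker push on the command line.

import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(
    path=".", tag="myregistry.example.com/my-app:1.0"
)

# Push the image to a remote container registry so it can be pulled elsewhere
# and turned into one or more running instances.
for line in client.images.push(
    "myregistry.example.com/my-app", tag="1.0", stream=True, decode=True
):
    print(line)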

I said that you can package everything your container needs to run successfully, but the last point to make is that the nature of the container package gives you a way to enforce a clear boundary for your application and a way to enforce runtime isolation. This feature is important when you’re running a mix of various applications and tools…and you want to make sure a rogue process built or run by someone else doesn’t interfere with the operation of your application.

Container Orchestrators

So containers came along and provided a bunch of great benefits for me as a developer.  However, what if I start building an application, and then I realize that I need a way to organize and run multiple instances of my container at runtime to address the expected demand?  Or better yet, if I’m building a system comprised of multiple microservices that all need their own container instances running?  Do I have to figure out a way to maintain the desired state of this system that’s really a dynamic collection of container instances?

This is where container orchestration comes in.  A container orchestrator is a tool that helps manage how your container instances are created, scaled, and managed at runtime, how they are placed on the underlying infrastructure, how they communicate with each other, and so on.  The “underlying infrastructure” is a fleet of one or more servers that the orchestrator manages – the cluster.  Ultimately, the orchestrator helps manage the complexity of taking your container-based, in-development applications to a more robust platform.

Typically, interaction with an orchestrator occurs through a well-defined API, and the orchestrator takes on the tasks of creating, deploying, and networking your container instances across any container host (the servers included in the cluster), exactly as you’ve specified in your API calls.

Using these fundamental components, orchestrators provide a unified compute layer on top of a fleet of machines that allows you to decouple your application from these machines. And the best orchestrators go one step further and allow you to specify how your application should be composed, thus taking the responsibility of running the application and maintaining the correct runtime configuration…even when unexpected events occur.
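To make the “well-defined API” and desired-state ideas concrete, here is a minimal sketch using the official Kubernetes Python client (the names and image are placeholders): you declare that three replicas of a container should exist, and the orchestrator takes on the job of making that true and keeping it true.

from kubernetes import client, config

config.load_kube_config()  # use the same cluster/credentials that kubectl is configured for
apps = client.AppsV1Api()

# Desired state, described declaratively: three replicas of a container.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-app",
                        image="myregistry.example.com/my-app:1.0",
                    )
                ]
            ),
        ),
    ),
)

# Hand the desired state to the orchestrator; it creates, places, and restarts
# container instances as needed to keep three replicas running.
apps.create_namespaced_deployment(namespace="default", body=deployment)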

VIEW OUR AZURE CAPABILITIES
Since 2009, AIS has been working with Azure honing our capabilities and offerings. View the overview of our Azure-specific services and offerings.

Kubernetes

Kubernetes is a container orchestrator that delivers the capabilities mentioned above. (The name “Kubernetes” comes from the Greek term for “pilot” or “helmsman of a ship.”) Currently, it is the most popular container orchestrator in the industry.

Kubernetes was originally developed by Google, based in part on the lessons learned from developing its internal cluster management and scheduling system, Borg.  Google open-sourced Kubernetes in 2014 and later donated the project to the Cloud Native Computing Foundation (CNCF) to encourage community involvement in its development. The CNCF is a child entity of the Linux Foundation and operates as a “vendor-neutral” governance group. Kubernetes is now consistently in the top ten open source projects based on total contributors.

Many in the industry say that Kubernetes has “won” the mindshare battle for container orchestrators, but what gives Kubernetes such a compelling value proposition?  Well, beyond meeting the capabilities mentioned above regarding what an orchestrator “should” do, the following points also illustrate what makes Kubernetes stand out:

  • The largest ecosystem of self-driven contributors and users of any orchestrator technology facilitated by CNCF, GitHub, etc.
  • Extensive client application platform support, including Go, Python, Java, .NET, Ruby, and many others.
  • The ability to deploy clusters on-premises or in the cloud, including native, managed offerings from the major public cloud providers (AWS, GCP, Azure). In fact, you can use the SAME API with any deployment of Kubernetes!
  • Diverse workload support with extensive community examples – stateless and stateful, batch, analytics, etc.
  • Resiliency – Kubernetes is a loosely-coupled collection of components centered around deploying, maintaining and scaling workloads.
  • Self-healing – Kubernetes works as an engine for resolving state by converging the actual and the desired state of the system.

Kubernetes Architecture

A Kubernetes cluster will always include a “master” and one or more “workers”.  The master is a collection of processes that manage the cluster, and these processes are deployed on a master node or multiple master nodes for High Availability (HA).  Included in these processes are:

  • The API server (kube-apiserver)
  • A distributed key-value store for persisting cluster management data (etcd)
  • The core control loops for monitoring existing state and managing desired state (kube-controller-manager)
  • The control loops that allow specific cloud platform integration (cloud-controller-manager)
  • A scheduler component for the placement of Kubernetes container groups, known as pods (kube-scheduler)

Worker nodes are responsible for actually running the container instances within the cluster.  In comparison, worker nodes are simpler in that they receive instructions from the master and set about serving up containers.  On the worker node itself, there are three main components installed which make it a worker node in a Kubernetes cluster: an agent called the kubelet that identifies the node and communicates with the master, a network proxy for interfacing with the cluster network stack (kube-proxy), and a plug-in interface that allows the kubelet to use a variety of container runtimes, called the Container Runtime Interface (CRI).

diagram of Kubernetes architecture

Image source

Managed Kubernetes and Azure Kubernetes Service

“Managed Kubernetes” is a deceptively broad term that describes a scenario where a public cloud provider (Microsoft, Amazon, Google, etc.) goes a step beyond simply hosting your Kubernetes clusters in virtual machines to take responsibility for deploying and managing your cluster for you.  Or, more accurately, they will manage portions of your cluster for you.  I say “deceptively” broad for this reason – the portions that are “managed” vary by vendor.

The idea is that the cloud provider is:

  1. Experienced at managing infrastructure at scale and can leverage tools and processes most individuals or companies can’t.
  2. Experienced at managing Kubernetes specifically, and can leverage dedicated engineering and support teams.
  3. Able to add additional value by providing supporting services on the cloud platform.

In this model, the provider does things like abstracting the need to operate the underlying virtual machines in a cluster, providing automation for actions like scaling a cluster, upgrading to new versions of Kubernetes, etc.

So the advantage for you, as a developer, is that you can focus more of your attention on building the software that will run on top of the cluster, instead of on managing your Kubernetes cluster, patching it, providing HA, etc. Additionally, the provider will often offer complementary services you can leverage like a private container registry service, tools for monitoring your containers in the cluster, etc.

Microsoft Azure offers the Azure Kubernetes Service (AKS), which is Azure’s managed Kubernetes offering. AKS allows full production-grade Kubernetes clusters to be provisioned through the Azure portal or automation scripts (ARM, PowerShell, CLI, or combination).  Key components of the cluster provisioned through the service include:

  • A fully managed, highly available master. There’s no need to run separate virtual machines for the master components; the service provides this for you.
  • Automated provisioning of worker nodes – deployed as Virtual Machines in a dedicated Azure resource group.
  • Automated cluster node upgrades (Kubernetes version).
  • Cluster scaling through auto-scale or automation scripts.
  • CNCF certification as a compliant managed Kubernetes service. This means it leverages the Cloud-controller-manager standard discussed above, and its implementation is endorsed by the CNCF.
  • Integration with supporting Azure services including Azure Virtual Networks, Azure Storage, Azure Role-Based Access Control (RBAC), and Azure Container Registry.
  • Integrated logging for apps, nodes, and controllers.

Conclusion

The world of containers continues to evolve, and orchestration is an important consideration when deploying your container-based applications to environments beyond “development.”  While not simple, Kubernetes is a very popular choice for container orchestration and has extremely strong community support.  The evolution of managed Kubernetes makes using this platform more realistic than ever for developers (or businesses) interested in focusing on “shipping” software.

Today, let’s talk about network isolation and traffic policy within the context of Kubernetes.

Network Policy Specification

Kubernetes’ first-class notion of network policy allows a customer to determine which pods are allowed to talk to which other pods. While these policies are part of the Kubernetes specification, tools like Calico and Cilium are what actually implement them.

Here is a simple example of a network policy:

...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted
  ...

In the above example, only pods with the label zone: trusted are allowed to send incoming traffic to the pod.

egress:
  - action: deny
    destination:
      notSelector: ns == 'gateway'

The above example deals with outgoing traffic. This network policy ensures that outgoing traffic is blocked unless the destination matches the selector ns == 'gateway'.

As you can see, network policies are important for isolating pods from each other in order to avoid leaking information between applications. However, if you are dealing with data that requires higher trust levels, you may want to consider isolating the applications at the cluster level. The following diagrams depict both logical (network policy based) and physical (isolated) clusters.

Diagrams: a single prod cluster (logical, network-policy-based isolation) and separate per-team prod clusters (physical isolation)

Network Policy is NOT Traffic Routing…Enter Istio!

Network policies, however, do not allow us to control the flow of traffic on a granular level. For example, let’s assume that we have three versions of a “reviews” service (a service that returns user reviews for a given product). If we want the ability to route the traffic to any of these three versions dynamically, we will need to rely on something else. In this case, let’s use the traffic routing provided by Istio.

Istio is a tool that manages the traffic flow across services using two primary components:

  1. An Envoy proxy (more on Envoy later in the post) distributes traffic based on a set of rules.
  2. The Pilot manages and configures the traffic rules that let you specify how traffic should be routed.

Diagram of Istio Traffic Management

image source

Here is an example of an Istio policy that directs all traffic to the V1 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Here is a Kiali Console view of all “live” traffic being sent to the V1 version of the “reviews” service:

Kiali console screenshot

Now here’s an example of an Istio policy that directs all traffic to the V3 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3

And here is a Kiali Console view of all “live” traffic being sent to the V3 version of the “reviews” service:

Kiali console screenshot v3

Envoy Proxy

Envoy is a lightweight proxy with powerful routing constructs. In the example above, the Envoy proxy is deployed as a “sidecar” alongside our services (the product page and reviews services) and handles their outbound traffic. Envoy can dynamically route all outbound calls from the product page to the appropriate version of the “reviews” service.

We already know that Istio makes it simple for us to configure the traffic routing policies in one place (via the Pilot). But Istio also makes it simple to inject the Envoy proxy as a sidecar. The following kubectl command labels the namespace for automatic sidecar injection:

#--> Enable Side Car Injection
kubectl label namespace bookinfo istio-injection=enabled

As you can see, each pod has two containers (the service and the Envoy proxy):

# Get all pods 
kubectl get pods --namespace=bookinfo

I hope this blog post helps you think about traffic routing between Kubernetes pods using Istio and Envoy. In future blog posts, we’ll explore the other facets of a “service mesh” – a common substrate for managing a large number of services, with traffic routing being just one facet of a service mesh.