Lift n Shift Approach to Cloud Transformation

What does it mean to Rehost an application?

Rehosting is an approach to migrating business applications hosted in on-premises data center environments to the cloud by moving the application “as-is,” with little to no changes to the business functions performed by the application. It’s a faster, less resource-intensive migration approach that gets your apps into the cloud without much code modification. It is often a good first step to cloud transformation.

Organizations with applications that were initially developed for an on-premises environment commonly look to rehosting to take advantage of cloud computing benefits. These benefits may include increased availability and networking speeds, reduced technical debt, and a pay-per-usage cost structure. When defining your cloud migration strategy, it’s essential to analyze all migration approaches, such as re-platforming, refactoring, replacing, and retiring.


SPFx Modern Web Development

SharePoint has been a widely adopted and popular content management system for many large organizations over the past two decades. From SharePoint Portal Server in 2001 to SharePoint Server 2019 and SharePoint Online, the ability to customize the user experience (UX) has evolved dramatically, keeping pace with the evolution of modern web design. The SharePoint Framework (SPFx) is a page and web part model that provides full support for client-side SharePoint development and embraces open-source tooling. SPFx works in SharePoint Online as well as on-premises SharePoint 2016 and SharePoint 2019, and it works in both modern and classic pages. SPFx web parts and extensions are the latest powerful tools we can use to deliver great UX!

Advantages of using SharePoint Framework

1. It Can’t Harm the Farm

Earlier SharePoint (SP) customizations executed on the server as compiled, server-side code written in a language such as C#. Historically, we created web parts as full-trust C# assemblies that were installed on the SharePoint servers and had access to disrupt SharePoint for all users. Because this code ran with far greater permissions on the server, it could adversely impact or even crash the entire farm. Microsoft tried to solve this problem in SP 2010 by introducing Sandbox solutions, followed by the App Model, now known as the Add-in model.

SPFx development is based on JavaScript running in the browser, making REST API calls to SharePoint and Office 365 back-end workloads; it does not touch the internals of SharePoint.

The SharePoint Framework is a safer, lower-risk model for SharePoint development.

2. Modern Development Tools

By building SPFx elements with JavaScript and its wealth of libraries, the UX and UI can be shaped as beautifully as any modern website. The JavaScript is embedded directly in the page, and the controls are rendered in the normal page DOM.

SharePoint Framework development is JavaScript framework-agnostic. The toolchain is based on common open-source client development tools such as npm, TypeScript, Yeoman, webpack, and gulp, all running on Node.js. It supports open-source JavaScript libraries such as React, Angular, Handlebars, Knockout, and more. Together these enable a lightweight and fast user experience.

3. Mobile-First Design

“Mobile first,” as the name suggests, means we start the product design from the mobile end, which has the most constraints on making content usable in the small space of a phone. From there, we expand those features into the more generous space of a tablet or desktop version.

Because SharePoint Framework customizations run in the context of the current page (and not in an IFRAME), they are responsive, lightweight, accessible, and mobile-friendly. Mobile support is built-in from the start. Content reflows across device sizes and pages are fast and fluid.

4. Simplified Deployment

There is some work to do at the beginning of a new project to set up the SPFx structure to support reading from a remote host. An App Catalog must be created, as well as generating and uploading a manifest file. If the hosted content is connected with a CDN (Content Delivery Network), that will also require setup. However, once those structural pieces are in place, deployment is simplified to updating files on the host location. It does not require traditional code deployments of server-side code, with its attendant restrictions and security review lead time.

5. Easier Integration of External Data Sources

With SPFx, calls to data from external sources may be easier since it’s web content hosted outside of SharePoint.

SPFx Constraints and Disadvantages

The SharePoint Framework is only available in SharePoint Online, on-premises SharePoint 2016, and SharePoint 2019 as of this writing. SPFx cannot be added to earlier versions of SharePoint such as SharePoint 2013 and 2010.

SharePoint Framework extensions cannot be used in on-premises SharePoint 2016; they are available only in SharePoint 2019 and SharePoint Online.

SPFx, like any other client-side implementation, runs in the context of the logged-in user. Permissions cannot be elevated to impersonate an admin user the way they can in farm solutions, CSOM (client-side object model) context, SharePoint add-ins, or Office 365 web applications. The application's functionality is limited to the current user's permission level, and customization is based on that as well. To overcome this constraint, a hybrid solution can be implemented in which SPFx communicates with Application Programming Interfaces (APIs). The APIs would be registered as a SharePoint add-in that uses the app-only context to communicate with SharePoint. For this communication between SPFx and the API to work, the API must support CORS (Cross-Origin Resource Sharing), since the calls are cross-domain, client-side calls.
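The SPFx side of that hybrid pattern is TypeScript, but the CORS requirement lands on the API tier. As a minimal, hedged sketch (assuming a Python/Flask API purely for illustration; the tenant URL and route are placeholders, and the add-in registration and app-only plumbing are omitted), the API simply needs to allow the SharePoint origin for cross-domain browser calls:

from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)

# Allow cross-domain, client-side calls from the (hypothetical) SharePoint tenant origin.
CORS(app, origins=["https://contoso.sharepoint.com"])

@app.route("/api/expenses")
def expenses():
    # In the hybrid pattern described above, this is where the API would use its
    # app-only SharePoint context (via its add-in registration) to read or write data.
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5000)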

SPFx is also not suited to long-running operations, since it is an entirely client-side implementation; a web request cannot wait indefinitely for a long-running operation to respond. For those processes, a hybrid approach works well: implement the long-running operation in an Azure WebJob or Azure Function, and let SPFx receive updates from it via a webhook.

Developers coming from a server-side background will face a learning curve with entirely client-side development, but TypeScript goes a long way toward easing the transition.

SPFx Comparison to Other Technologies and Models

SharePoint lists come in handy for many organizations when entering data, but customers always ask for the ability to display the data in some reporting format, such as a dashboard. Below we compare the different ways we can accomplish this and why SPFx is a good fit:

  • Classic Web Part Pages: If we do not want to use the SharePoint Framework, SharePoint 2019 still supports classic web part pages. You can add content editor web parts and deploy any custom JavaScript/jQuery code. However, with this approach, uploading the JS files to an SP library and manually adding pages to a library becomes cumbersome, and we may end up writing custom JSOM (JavaScript object model) code just to make deployment easier. Microsoft does not recommend this approach, and there is a possibility it will no longer be supported in the future. Also, if you want to render any custom tables, you need to write custom code or use a third-party control. Using the SharePoint Framework, we can easily use Office UI Fabric React components like DetailsList.
  • Custom App: We can design custom applications to deploy in the cloud, which can read the data from SharePoint. The challenge is that each customer environment is different. It’s not always easy to connect to SharePoint from the cloud in a production environment, especially with CAC (Common Access Card) authenticated sites.
  • PowerApps/Logic Apps: With newer technologies such as PowerApps, Logic Apps, and Flow, we can design custom SharePoint forms and business logic and connect to SharePoint using the SharePoint connector. In a production environment, however, it is not always easy to get connections approved or to connect with on-premises data, and PowerApps and Flow require the purchase of licenses.

Using SPFx, we can quickly design dashboards using Office UI Fabric components. For deployment, we do not need to write any custom utility code; the SharePoint Framework package can create the lists and libraries as well.

Wrapping Up

We hope this blog provided a useful overview of SPFx and its capabilities. Look out for our next blog post (Part II) on developing and deploying custom SPFx web parts and extensions, and connecting to APIs/Azure in SharePoint Online and SharePoint 2019!


Exchange 2010 is at the end of its journey, and what a long road it's been! For many customers, it has been a workhorse product facilitating excellent communications with their employees. It's sad to see the product go, but it's time to look to the future, and the future is in the Microsoft Cloud with Exchange Online.

What does End of Support mean for my organization?


While Exchange 2010 isn’t necessarily vanishing from the messaging ecosystem, support of the product ends in all official capacities on January 14, 2020. Additionally, Office 2010 will be hitting the end of support on October 13, 2020, which means your old desktop clients will also be unsupported within the same year. What this means is that businesses using Exchange and Office applications will be left without support from Microsoft – paid or free. End of support also means the end of monthly security updates. Without regular security updates and patches from Microsoft to protect your environment, your company is at risk.

  1. Security risks – Malware protection and attack surface protection become more challenging as products are off lifecycle support. Any new vulnerability may not be disclosed or remediated.
  2. Compliance risks – As time goes on, organizations must adhere to new compliance requirements – for example, GDPR was a massive recent deadline. While managing these requirements on-premises is possible, it is often challenging and time-consuming. Office 365 offers improved compliance features for legal and regulatory requirements. The most notable is that Microsoft cloud environments comply with most regulatory needs, including HIPAA, FISMA, FedRAMP, and more.
  3. Lack of software and hardware support – No more technical support for problems that may occur, such as bug fixes, stability and usability issues, and time zone updates. Dropped support for interoperation with third-party vendors, like MDM and message hygiene solutions, can mean your end-user access stops working. And that's not to mention the desktop and mobile mail clients already deployed, or perhaps being upgraded, around this now decade-old infrastructure.
  4. Speaking of old infrastructure – This isn’t just about applications and services. For continued support and to meet compliance requirements, you must migrate to newer hardware to retain, store, and protect your mailbox and associated data. Office 365 absolves you of all infrastructure storage costs. That is a perfect opportunity, and often a justification in and of itself, to move to the cloud.

It’s time to migrate to Office 365 . . . Quickly!

There are many great reasons to move your Exchange environment to a hosted environment in Office 365. The biggest one is that your company will no longer have to worry about infrastructure costs.

Here are some of the other significant costs and chores you won't have to worry about:

  • Purchasing and maintaining expensive storage and hardware infrastructure
  • Time spent keeping up to date on product, security, and time zone fixes
  • Time spent on security patching OS or updating firmware
  • Cost for licensing OS or Exchange Servers
  • Upgrading to a new version of Exchange; you're always on the latest version of Exchange in Office 365
  • Maintaining compliance and regulations for your infrastructure whether Industry, Regional or Government
  • With an environment of thousands of users and potentially unlimited mailboxes, relieving your admins of day-to-day database, storage, server, and failover management is a huge relief and cost savings, letting your team focus on Exchange administration itself

Another big cost for on-premises is storage: data repositories for mailbox data retention, archiving, and journaling. This value cannot be overstated – do you have large mailboxes or archive mailboxes? Are you paying for an archiving or eDiscovery solution? Exchange Online Plan 2 licenses (often bundled into larger enterprise licenses such as E3 or E5) allow for archive mailboxes of unlimited size. These licenses also offer eDiscovery and compliance options that meet the needs of complex organizations.

The value of the integrated cloud-based security and compliance resources in the Office 365 environment is immense. Many of our customers have abandoned their entire existing MDM solutions in favor of Intune. Data Loss Prevention allows you to protect your company data against exfiltration. Office 365 Advanced Threat Protection fortifies your environment against phishing attacks and offers zero-day attachment reviews. These technologies are just the tip of the iceberg and can either replace or augment an existing malware and hygiene strategy. And all these solutions specifically relate and interoperate with Exchange Online.


Some other technologies that seamlessly work with Exchange Online and offer integral protections to that product as well as other Microsoft cloud and SaaS solutions:

  • Conditional Access – Precise, granular access control to applications
  • Intune – Device and application management and protection
  • Azure AD Identity Protection – Manage risk levels for associate activity
  • Azure Information Protection – Classify and protect documents
  • Identity Governance – Lifecycle management for access to groups, roles, and applications

Think outside the datacenter


There are advantages to thinking outside the (mail)box when considering an Exchange migration strategy to the cloud. Office 365 offers an incredible suite of interoperable tools to meet most workflows. So while we can partner with you on the journey to Office 365, don't overlook some of the other key tools available in the Microsoft arsenal. These include OneDrive for Business, SharePoint Online, and Microsoft Teams, all of which could be potential next steps in your SaaS journey! Each tool is a game-changer in its own right, and each will bring incredible collaboration value to your associates.

AIS has helped many customers migrate large and complex on-premises environments to Office 365.

Whether you need to:

  • Quickly migrate Exchange to Exchange Online for End of Support
  • Move File services to OneDrive and SharePoint Online for your Personal drives/Enterprise Shares/Cloud File Services
  • Adopt Microsoft Teams from Slack/HipChat/Cisco Teams
  • Migrate large and complex SharePoint farm environments to SharePoint Online

Whatever it is, we’ve got you covered.

What to do next?

Modern Workplace Assessment for Exchange 2010

Take action right now and start a conversation with AIS today. Our experts will analyze your current state and roll out your organization's migration to Office 365 quickly and seamlessly.

To accelerate your migration to Office 365, let us provide you a free Modern Workplace assessment that comprehensively evaluates:

  • Organizational readiness for adoption of Office 365 (Exchange and desktop-focused)
  • Desktop-focused insights and opportunities to leverage Microsoft 365 services
  • Total cost of ownership (TCO) for migrating Exchange users to Office 365, including licensing fees
  • Migration plans with detailed insights and approaches for service migrations such as:
    • Exchange to Exchange Online
    • File servers, personal shares, and Enterprise shares to OneDrive for Business and SharePoint Online
    • Slack / HipChat / Cisco Teams to Microsoft Teams
    • SharePoint Server to SharePoint Online


Wrapping Up

Migrating your email to Office 365 is your best and simplest option to help you retire your Exchange 2010 deployment. With a migration to Office 365, you can make a single hop from old technology to state-of-the-art features.

AIS has the experience and expertise to evaluate and migrate your on-premises Exchange and collaboration environments to the cloud. Let us focus on the business of migrating your on-premises applications to Office 365, so you can focus on the business of running your business. This is the beginning of a journey, but it's one AIS is familiar with and comfortable guiding you through to a seamless and successful cloud migration. If you're interested in learning more about our free Modern Workplace assessment or getting started with your Exchange migration, reach out to AIS today.


To benefit the most from this post, you should understand the following concepts:

  • Dynamics CRM Workflows
  • DocuSign Integration with Dynamics CRM
  • DocuSign merge fields and merge-back

The Problem

I recently experimented with integrating DocuSign with Dynamics 365 — specifically, merging data into a DocuSign form and then writing the data back into Dynamics. After reading the DocuSign for Dynamics 365 CRM – 6.1 documentation, I found that DocuSign drop-down and radio button controls are not supported for Dynamics merges and write-backs. So I started work on a solution that would use a Checkbox field in DocuSign and a Two Options field in Dynamics. I had all my text fields working correctly and assumed the checkbox would be straightforward since both were Boolean fields.

I was disappointed to find out that the solution would not merge. After researching online and trying a few suggestions, I finally decided to add a temporary text field to my DocuSign form and see what Dynamics was putting into it, and found that the value was “Yes.” Then I looked at the form data in DocuSign…and it had the value “X.” I tried replacing the values for “Yes” and “No” in the Dynamics Two Options field with “X” and “O”, but that didn’t work either.

The Solution

I finally decided to change the “Yes” and “No” values to “true” and “false.”

This time, when the data was merged, the checkbox was checked!

And once the client receives the email, fills out the form, and the .pdf files are sent…this is when the ‘X’ we saw in the form data is used:

Finally, I verified it worked end-to-end by unchecking the box in Dynamics and saving the record:

After firing off the workflow to merge data in DocuSign form, the box is unchecked now:

Send the email off to be filled, check the box and add a new allergy:

Now, wait for the Dynamics envelope status workflow to complete. Check the record, and you'll see it has updated successfully in Dynamics.

Conclusion

Albeit a small issue, I'm surprised I didn't find it documented anywhere. So if you've come across this problem working with DocuSign Checkbox fields and Dynamics 365, I hope this post saves you some time!

ServiceNow is a sophisticated information technology tool, and one that can be easily underestimated.

Looking back, I initially thought ServiceNow was limited to a helpdesk front-end intake tool.  

But lately, I've realized that while becoming the leading request intake tool, ServiceNow must have also recognized its advantage in offering additional capabilities sought by organizations of all sizes. Perhaps ServiceNow came to realize that aggregations of problem-related requests can be viewed from a different perspective: as an excellent barometer for latent pain points. This prompted ServiceNow to pursue larger goals.

Inside/Out

Delving into ServiceNow permitted me to easily write code within SharePoint that reaches into ServiceNow to obtain or modify data in its tables. I was also able to write code that reaches outward from inside ServiceNow to obtain or modify data residing in SharePoint.

This two-way data traffic was easily accomplished through the magic of REST and the appropriate sprinkle of user authentication. Any environment enabled with REST extensions would work just as well.    
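To make that concrete, here is a hedged sketch of the REST traffic using ServiceNow's Table API. It uses Python's requests library purely for illustration (the author's actual code ran inside the SharePoint and ServiceNow scripting environments); the instance URL, table, and credentials are placeholders:

import requests

# Placeholders -- substitute a real instance, credentials, and table.
INSTANCE = "https://example.service-now.com"
AUTH = ("integration.user", "secret")
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

# Read: pull a few active incident records out of ServiceNow.
response = requests.get(
    f"{INSTANCE}/api/now/table/incident",
    params={"sysparm_query": "active=true", "sysparm_limit": 5},
    auth=AUTH,
    headers=HEADERS,
)
response.raise_for_status()
records = response.json()["result"]
for record in records:
    print(record["number"], record["short_description"])

# Write: modify a field on one of those records via its sys_id.
if records:
    sys_id = records[0]["sys_id"]
    requests.patch(
        f"{INSTANCE}/api/now/table/incident/{sys_id}",
        json={"comments": "Updated via the Table API"},
        auth=AUTH,
        headers=HEADERS,
    ).raise_for_status()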

So ServiceNow is highly “integrate-able,” as we can easily extend outward from inside it, as well as extend inward from virtually any environment. Add in the fact that ServiceNow is a Platform as a Service (PaaS), and my eyes start widening: I've stumbled upon an extensible business-support tool that might be the long-awaited game changer for how transactional work is organized, managed, and used to determine worthwhile projects, driven by demonstrated and specific business needs.

What’s Next

My expedition into the ServiceNow platform is still fairly new but it makes me stand back when I hear that some government organization leaders are considering replacing certain (perhaps many) SharePoint uses with the evolving tools offered by ServiceNow. Many large organizations have already adopted it, and the company has experienced extraordinary growth from its birth in 2003.  In just 16 years, ServiceNow has grown to 6,000 employees worldwide with revenue of over $2.6 billion.    

My experience using ServiceNow from the perspective of a full administrator has been extremely positive. As an administrator, I’ve been able to access anything I need through the ease of highly-simplified navigation.  The vertical navigation segment lets me expand by clicking on high-level items. Alternatively, the navigator allows me to filter via a field that eliminates anything that doesn’t match the string I’m keying and surfaces items from the hierarchy for anything that does match. The result is a stunningly quick way to find any table, workflow, or tool built into ServiceNow for administrators. In fact, working as an administrator has been much more intuitive than entering a ServiceNow ticket for any specific request.  

 

I recently built a machine learning model, trained it, and explored the implications of deploying it using Kubernetes. Machine learning trains a program to recognize patterns in data so that when new data is provided, it can make predictions based on what it has learned. Kubernetes is a container orchestrator that automates the deployment and scaling of containers. Packaging a machine learning program and deploying it on Kubernetes has the potential to help our customers with their increasingly complex machine learning needs.

First, I determined what I wanted to accomplish with machine learning, the tools I needed, and how to put them together.  Teaching a machine how to recognize imagery is fascinating to me, so I focused on image recognition. I found the PlanesNet dataset on Kaggle — a dataset focused on recognizing planes in aerial imagery.  The dataset would be simple enough to implement but have good potential for further exploration.

To build a proof of concept I used TensorFlow, TFLearn, and Docker.  TensorFlow is an open-source machine learning library and TFLearn is a higher-level TensorFlow API which aids in writing less code to get started.  TFLearn is a Python library, so I wrote my proof of concept in Python.  Docker was used to package the program into a container to run on Kubernetes.

Machine Learning

planes data set

The PlanesNet dataset has 32,000 color images, each at 20px by 20px.  There are 8,000 images classified as “plane” and 24,000 images classified as “no-plane.” Reading in the dataset and preparing it for processing was straightforward using Python libraries like Pillow and NumPy.  Then I broke the dataset up into a training set and a testing set using the train_test_split function in the sklearn Python library.  The training set was used to build the model weights while the testing set validated how well the model was trained.
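A minimal sketch of that preparation step, assuming the 32,000 chips are unpacked into per-class folders (the folder names and split ratio are assumptions rather than details from the post):

import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split

# Hypothetical folder layout: planesnet/plane/*.png and planesnet/no-plane/*.png
def load_planesnet(root="planesnet"):
    images, labels = [], []
    for label, folder in enumerate(["no-plane", "plane"]):
        for path in Path(root, folder).glob("*.png"):
            img = Image.open(path).convert("RGB")                  # 20px by 20px color chips
            images.append(np.asarray(img, dtype=np.float32) / 255.0)
            labels.append(label)
    X = np.stack(images)                                           # shape: (N, 20, 20, 3)
    y = np.eye(2)[labels]                                          # one-hot: [no-plane, plane]
    return train_test_split(X, y, test_size=0.2, random_state=42)

X_train, X_test, y_train, y_test = load_planesnet()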

Neural Network

One of the most complex parts was how to design my simple neural network. A neural network is broken into layers starting with the input layer, then one or more hidden layers, ending with the output layer.

Creating the input layer, I defined its shape, preprocessing, and augmentation. The shape of the PlanesNet data is a 20px by 20px by 3 colors matrix. The preprocessing allowed me to tweak preparing the data for both training and testing while the augmentation allowed me to perform operations (like flipping or rotating images) on the data while training.

Because I was working with imagery, I chose to use two convolutional hidden layers before arriving at the output layer. Convolutional layers perform a set of calculations on each pixel and its surrounding pixels, attempting to learn features (e.g., lines, curves) of an image. After my hidden layers, I introduced a dropout rate, which helps to reduce overfitting.

Next, I added a readout layer which brought my neural network down to the number of expected outputs.  Finally, I added a regression layer to specify a loss function, optimizer, and learning rate.  With this network, I could now train it over multiple iterations, called epochs, until I reached the desired level of prediction accuracy.

diagram of a simple neural network

Simple Neural Network
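Here is a condensed TFLearn sketch of the network just described: an input layer with preprocessing and augmentation, two convolutional hidden layers, dropout, a readout layer, and a regression layer. The specific filter counts, dropout rate, optimizer, and epoch count are assumptions, not values from the post:

import tflearn
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d
from tflearn.layers.estimator import regression

# Input layer: 20x20x3 images, with preprocessing and augmentation hooks
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()

img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.0)

net = input_data(shape=[None, 20, 20, 3],
                 data_preprocessing=img_prep,
                 data_augmentation=img_aug)

# Two convolutional hidden layers to learn image features
net = conv_2d(net, 32, 3, activation='relu')
net = conv_2d(net, 64, 3, activation='relu')

# Dropout to reduce overfitting, then the readout layer (2 classes: plane / no-plane)
net = dropout(net, 0.5)
net = fully_connected(net, 2, activation='softmax')

# Regression layer specifies the loss function, optimizer, and learning rate
net = regression(net, optimizer='adam',
                 loss='categorical_crossentropy', learning_rate=0.001)

model = tflearn.DNN(net)
model.fit(X_train, y_train, n_epoch=20, validation_set=(X_test, y_test),
          shuffle=True, show_metric=True, batch_size=128)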

Using this model, I could now predict if an image was of an airplane. I loaded the image, called the predict function and viewed the output.  The output gave me a percentage of likelihood that the image was an airplane.
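The prediction call itself is short; in this sketch, "scene_chip.png" is a placeholder image path and model is the tflearn.DNN instance trained above:

import numpy as np
from PIL import Image

# Load a hypothetical 20x20 crop from aerial imagery and normalize it like the training data.
chip = np.asarray(Image.open("scene_chip.png").convert("RGB"), dtype=np.float32) / 255.0
plane_probability = model.predict([chip])[0][1]   # index 1 = the "plane" class
print(f"Likelihood of a plane: {plane_probability:.1%}")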

Exploring Scaling

Now that I had my simple neural network for training and predicting, I packaged it into a Docker container so it could be run on more than my single computer.  As I examined the details of deploying to Kubernetes, two things quickly became apparent.  First, my simple neural network would not train across multiple container instances.  Second, my prediction program would scale well if I wrapped it in a web API.

For my simple neural network, I had not implemented anything to split a dataset, train multiple containers, and bring the results back together.  Therefore running multiple instances of my container using Kubernetes would only provide the benefit of choosing the container with the highest accuracy model. (The dataset splitting process along with the input augmentation causes each container to have different accuracies.)  To have multiple containers which coordinate learning in my simple neural network would require further design.

diagram of the container model

My container's prediction program executes via the command line, but wrapping it in a web API endpoint would make it easier to use. Since each instance of the container has the trained model, Kubernetes could scale the number of running instances of my container up or down to meet the demand on the web API endpoint. Kubernetes also provides a method for rolling out updates to my container if I further train my network model.
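As a rough sketch of that wrapper, a small Flask app is one way each container instance could serve predictions. Here, build_planesnet_network() is a hypothetical helper wrapping the layer definitions shown earlier, and "planesnet.tfl" is a placeholder filename for weights saved after training:

import numpy as np
import tflearn
from flask import Flask, jsonify, request
from PIL import Image

# Rebuild the network and load the trained weights at container start-up.
model = tflearn.DNN(build_planesnet_network())   # hypothetical helper from the sketch above
model.load("planesnet.tfl")                      # weights previously saved with model.save()

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Accept an uploaded image and normalize it to the 20x20x3 input the model expects.
    img = Image.open(request.files["image"].stream).convert("RGB").resize((20, 20))
    chip = np.asarray(img, dtype=np.float32) / 255.0
    probability = float(model.predict([chip])[0][1])   # index 1 = "plane" class
    return jsonify({"plane_probability": probability})

if __name__ == "__main__":
    # Each container instance serves this endpoint; Kubernetes handles the scaling.
    app.run(host="0.0.0.0", port=8080)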

Conclusion

This was an excellent exercise in building a machine learning model, training it to predict an airplane from aerial imagery, and deploying it on Kubernetes. Additional applications could include expanding the dataset to include different angles of planes, along with recognizing various specific types of planes. It would also be beneficial to redesign the neural network so that training itself could take advantage of Kubernetes scaling. Using Kubernetes to deploy a prediction API based on a trained model is both beneficial and practical today.

I'm working on a project with a straightforward requirement that would typically be solved with out-of-the-box PowerApps platform features. The customer needs to capture employment application data as part of their online hiring process. In addition to entering standard employment data, applicants enter a typical month's worth of budgeted expenses to paint a clear financial picture, which serves as one factor in their suitability for employment.

The Requirements

The solution consists of an Application entity and a related Expense entity to track multiple expenses per Application. The Application entity needs to capture aggregate expense values based on expense categories. The Expense entity stores the institution or business receiving the payment, the Expense Type (lookup), the monthly payment in dollars, and the balance owed in dollars. Expense Types are associated with “Expense Categories.” An Expense Category is associated with one or more Expense Types. The Application entity tracks 15-20 aggregate expense values that need to be calculated from the individual expense entries.

For example, the expense types “Life Insurance” and “Auto Insurance” are both associated with the expense category “Insurance.” The Application entity has an aggregate field called “Total Insurance Expenses” to capture the sum of all monthly insurance-related expenses.

The basic entity model is shown below:

Diagram of entity model

To summarize all that detail (which is probably hard to follow in paragraph form), the essential requirements are:

  • Capture standard employment data for applicants
  • Capture monthly budgeted expenses associated with an expense type and category
  • Calculate 15-20 aggregate expenses based on expense category
  • The solution must accommodate the addition of new expense categories

The aggregate fields on the Application entity fall into one of four categories: 1) a monthly payment total by expense category, 2) an outstanding balance total by expense category, 3) a monthly payment total across all categories, and 4) a total outstanding balance across all categories.

The breakdown for each of the expense categories and their associated aggregations on the Application entity can be depicted as such:

Expense Category | Category Payment Rollup | Category Balance Rollup | All Payments Rollup | All Balances Rollup
Automobile | Total Automobile Expenses | Total Auto Balances Owed | Total Monthly Expenses | Total Outstanding Debt
Credit Card | Total Credit Card Expenses | Total Credit Card Balances Owed | Total Monthly Expenses | Total Outstanding Debt
Food/Clothing | Total Food/Clothing Expenses | (n/a) | Total Monthly Expenses | (n/a)
Housing | Total Housing Expenses | Total Housing Balances Owed | Total Monthly Expenses | Total Outstanding Debt
Insurance | Total Insurance Expenses | (n/a) | Total Monthly Expenses | (n/a)
Medical/Dental | Total Medical Expenses | (n/a) | Total Monthly Expenses | (n/a)
Other Debt | Total Other Debt Expenses | Total Other Debt Balances Owed | Total Monthly Expenses | Total Outstanding Debt
Utilities | Total Utility Expenses | (n/a) | Total Monthly Expenses | (n/a)

The table shows that some expense categories require a total of four aggregate calculations, whereas others only require two aggregate calculations. The calculations should occur when an Expense is created, the values for a monthly payment or balance owed change, an Expense is deactivated/activated (state change), or when an Expense is deleted. “Total Monthly Expenses” is calculated for all expense entries. Only four categories require the category balance and total outstanding balance calculations.

Platform Limitations

Maximum Number of Rollup Fields

Dynamics 365 only allows for a maximum of 10 rollup fields on any given entity — these are fields that take advantage of the “Rollup” field type customized in the solution, the values for which are automatically calculated by the platform on a predetermined interval, e.g., every 12 hours.

One option to overcome this limitation — only available in on-premises Dynamics 365 implementations — is to modify the maximum number of rollup fields per entity in the MSCRM_CONFIG database. There are rare circumstances wherein modifying table values in this database is beneficial. However, given the possibility of a disaster recovery situation in which Dynamics 365 needs to be reinstalled and/or recovered, any modifications made to the MSCRM_CONFIG database could be lost. Even if an organization has well-documented disaster recovery plans that account for these modifications, there's always a chance the documented procedures will not be followed or steps will be skipped.

Another consideration is the potential to move to the cloud. If the customer intends to move their Dynamics 365 application to the cloud, they’ll want to ensure their solution remains on a supported path, and eliminate the need to re-engineer the solution if that day comes.

Rollup Calculation Filters

Rollup fields in Dynamics 365 are indeed a powerful feature, but they do come with limitations that prevent their use in complex circumstances. Rollup fields only support simple filters when defining the rollup field aggregation criteria.

To keep this in the context of our requirements, note above that the Expense Type and Expense Category are lookup values in our solution. If we need to calculate the sum of all credit card expenses entered by an applicant, this is not possible given our current design, because the Expense Type is a lookup value on the expense entry. You’ll notice that when I try to use the Expense Type field in the filter criteria for the rollup field, I’m only given the choices “Does not contain data” and “Contains data.” Not only can’t I use actual values of the Expense Type, but I can’t drill down to the related Expense Category to include it in my aggregation filter.

Screenshot of test rollup screen

Alternatives

The limitations above could be overcome by redesigning our solution, for example, by choosing to configure both the Expense Type and Expense Category fields as Option Sets instead of lookups, along with some sophisticated Business Rules that appropriately set the Expense Category based on the selected Expense Type. That’s one option worth considering, depending on the business requirements with which you’re dealing. We could also choose to develop a code-heavy solution by writing plugin code to do all these calculations, thus side-stepping the limitation on rollup fields and accommodating the entity model I’ve described.

The Solution

Ultimately, however, the customer wants a solution that allows them to update their expense tracking requirements without needing developers to get the job done. For example, the organization may decide they no longer want to track a certain expense category or may want to add a new one. Choosing to create entities to store the necessary lookup values will afford them that kind of flexibility. However, that still leaves us the challenge of calculating the aggregate expense values on the Application entity.

The final solution may still require the involvement of their IT department for some of the configuration steps but ideally will not require code modifications.

Lookup Configuration

The first step toward our solution is to add four additional fields to the Expense Category entity. These four fields represent the four aggregation categories described above: 1) Category Payment Rollup, 2) Category Balance Rollup, 3) Payments Rollup (for all categories), and 4) Balances Rollup (for all categories).

These fields will allow users to define how aggregate values are calculated for each Expense Category, i.e., by identifying the target fields on the Application entity for each aggregation.

Screenshot of Expense Category in Dynamics 365

Custom Workflow Activity

The next step is to write a custom workflow activity to perform the aggregate calculations described above. Custom workflow activities present several benefits to the customer, primarily centered on the ease of configuration within the Dynamics 365 UI itself (they can run asynchronously, on demand, and/or when specific events occur on the target record type, e.g., create, update, delete, or state change). Custom workflow activities can also accept user-defined parameters configured in the workflow definition.

This means that — as you might have guessed — the custom workflow activity can be written in such a way to allow users to add new Expense Categories so that the aggregate calculations “just work” without requiring code modifications or changes to the workflow configuration in the solution.

Here’s the custom workflow activity class that runs the calculations followed by the workflow definition. As you can see below, I’ve included the Application and Expense Category fields as required input parameters for the workflow activity. (I’ll likely refactor this solution to accept the four fields as inputs, but for now, this gets the job done. Thanks to my good friend, Matt Noel, for that suggestion.) Further down, you’ll notice that for each aggregate field, we run a custom fetch query configured appropriately to perform the required calculation.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Workflow;
using System;
using System.Activities;

namespace Project.Workflow.Activities
{
    public class CalculateExpenses : CodeActivity
    {
        #region Input Parameters

        [RequiredArgument]
        [Input("Application")]
        [ReferenceTarget("new_application")]
        public InArgument<EntityReference> Application { get; set; }

        [RequiredArgument]
        [Input("Expense Category")]
        [ReferenceTarget("new_expensecategory")]
        public InArgument<EntityReference> ExpenseCategory { get; set; }

        private readonly string _categoryPaymentRollupField = "new_categorypaymentrollup";
        private readonly string _allPaymentsRollupField = "new_paymentsrollup";
        private readonly string _categoryBalanceRollupField = "new_categorybalancerollup";
        private readonly string _allBalancesRollupField = "new_balancesrollup";

        #endregion

        protected override void Execute(CodeActivityContext executionContext)
        {
            var tracer = executionContext.GetExtension<ITracingService>();
            var context = executionContext.GetExtension<IWorkflowContext>();
            var serviceFactory = executionContext.GetExtension<IOrganizationServiceFactory>();
            var orgService = serviceFactory.CreateOrganizationService(null);
            var orgContext = new OrganizationServiceContext(orgService);

            string[] _rollupFields =
            {
                _categoryPaymentRollupField,
                _allPaymentsRollupField,
                _categoryBalanceRollupField,
                _allBalancesRollupField
            };

            var expenseCategory = GetEntity(orgService, ExpenseCategory.Get(executionContext), _rollupFields);
            var applicationRef = Application.Get(executionContext);
            var application = new Entity("new_application")
            {
                Id = applicationRef.Id
            };

            // Set the Category Payment Rollup
            if (expenseCategory.GetAttributeValue<string>(_categoryPaymentRollupField) != null)
            {
                var paymentRollup = expenseCategory.GetAttributeValue<string>(_categoryPaymentRollupField);
                application[paymentRollup] = new Money(GetExpenseAggregate(orgService, application.Id, expenseCategory.Id, false, true));
            }

            // Set the rollup for all Monthly Payments
            if (expenseCategory.GetAttributeValue<string>(_allPaymentsRollupField) != null)
            {
                var allPaymentsRollup = expenseCategory.GetAttributeValue<string>(_allPaymentsRollupField);
                application[allPaymentsRollup] = new Money(GetExpenseAggregate(orgService, application.Id, expenseCategory.Id, false, false));
            }

            // Set the rollup for Category Balances
            if (expenseCategory.GetAttributeValue<string>(_categoryBalanceRollupField) != null)
            {
                var categoryBalanceRollup = expenseCategory.GetAttributeValue<string>(_categoryBalanceRollupField);
                application[categoryBalanceRollup] = new Money(GetExpenseAggregate(orgService, application.Id, expenseCategory.Id, true, true));
            }

            // Set the rollup for all Category Balances 
            if (expenseCategory.GetAttributeValue<string>(_allBalancesRollupField) != null)
            {
                var allBalancesRollup = expenseCategory.GetAttributeValue<string>(_allBalancesRollupField);
                application[allBalancesRollup] = new Money(GetExpenseAggregate(orgService, application.Id, expenseCategory.Id, true, false));
            }

            // Execute the update on the Application rollup fields 
            try
            {
                //trace 
                orgService.Update(application);
            }
            catch (Exception e)
            {
                //trace
                throw new InvalidPluginExecutionException("Error updating Application: " + e.Message);
            }
        }

        private static Entity GetEntity(IOrganizationService service, EntityReference e, params String[] fields)
        {
            return service.Retrieve(e.LogicalName, e.Id, new ColumnSet(fields));
        }

        private decimal GetExpenseAggregate(IOrganizationService service, Guid applicationId, Guid expenseCategoryId, bool balanceRollup, bool includeCategory)
        {
            var sum = 0m;

            var aggregateField = balanceRollup ? "new_balanceowed" : "new_monthlypayment";

            var fetchXML = @"<fetch distinct='false' mapping='logical' aggregate='true' >" +
                  "<entity name='new_expense' >" +
                    "<attribute name='" + aggregateField + "' aggregate='sum' alias='sum' />" +
                    "<filter type='and' >" +
                      "<condition attribute='statecode' operator='eq' value='0' />" +
                      "<condition attribute='new_applicationid' operator='eq' value='" + applicationId + "' />" +
                    "</filter>";

            var endFetch = "</entity></fetch>";

            if (includeCategory)
            {
                var categoryFetch = @"<link-entity name='new_expensetype' from='new_expensetypeid' to='new_expensetypeid' link-type='inner' alias='ExpenseType' >" +
                      "<link-entity name='new_expensecategory' from='new_expensecategoryid' to='new_categoryid' link-type='inner' alias='ExpenseCategory' >" +
                        "<filter type='and' >" +
                          "<condition attribute='new_expensecategoryid' operator='eq' value='" + expenseCategoryId + "' />" +
                        "</filter>" +
                      "</link-entity>" +
                    "</link-entity>";

                fetchXML += categoryFetch + endFetch;
            }
            else
            {
                fetchXML += endFetch;
            }

            FetchExpression fetch = new FetchExpression(fetchXML);

            try
            {
                EntityCollection aggregate = service.RetrieveMultiple(fetch);

                foreach (var c in aggregate.Entities)
                {
                    if (((AliasedValue)c["sum"]).Value is Money)
                    {
                        sum = ((Money)((AliasedValue)c["sum"]).Value).Value;
                        //tracer.Trace("Sum of payments is: {0}", sum);
                    }
                }
            }
            catch (Exception e)
            {
                //tracer.Trace(e.Message);
                throw new InvalidPluginExecutionException("Error returning aggregate value for " + aggregateField + ": " + e.Message);
            }

            return sum;
        }
    }
}

Workflow definition:

Powerapps workflow screenshot

Configuration of custom inputs:

Configuring custom inputs

Testing

After building and registering my workflow assembly, I created expense entries for all expense types ensuring that all expense categories were represented. The following images depict the successful aggregation of payments and balances:

Screenshot of expense output

Screenshot of debt output

Conclusion

Custom workflow activities are a powerful tool for balancing the need for a highly maintainable solution after deployment with complex requirements that call for a coded solution. The design gives end users the flexibility to adapt their data collection needs over time as their requirements change, and they can do so with little or no involvement from IT.

As I mentioned above, an alternative approach to this requirement could involve writing a plugin to perform the calculations. That approach would still require some entity-based configuration for flexibility but would offer limited end-user configurability if or when requirements change. I can also update the custom workflow activity to accept the four aggregate fields as optional arguments to the workflow. Doing so would enable users to run separate workflow processes for each expense type/category, giving them additional configuration control over these automated calculations.

If you've worked in software development or IT for any amount of time, chances are you've at least heard about containers…and maybe even Kubernetes.

Maybe you’ve heard that Google manages to spin up two billion containers a week to support their various services, or that Netflix runs its streaming, recommendation, and content systems on a container orchestration platform called Titus.

This is all very exciting stuff, but I’m more excited to write and talk about these things now more than ever before, for one simple reason: We are finally at a point where these technologies can make our lives as developers and IT professionals easier!

And even better…you no longer have to be a Google (or one of the other giants) employee to have a practical opportunity to use them.

Containers

Before getting into orchestrators and what they actually offer, let's briefly discuss the fundamental piece of technology that all of this depends on – the container itself.

A container is a digital package of sorts, and it includes everything needed to run a piece of software.  By “everything,” I mean the application code, any required configuration settings, and the system tools that are normally brought to the table by a computer’s operating system. With those three pieces, you have a digital package that can run a software application in isolation across different computing platforms because the dependencies are included in that package.

And there is one more feature that makes containers really useful – the ability to snapshot the state of a container at any point. This snapshot is called a container “image.” Think of it in the same way you would normally think of a virtual machine image, except that many of the complexities of capturing the current state of a full-blown machine image (state of the OS, consistency of attached disks at the time of the snapshot, etc.) are not present in this snapshot.  Only the components needed to run the software are present, so one or a million instances can be spun-up directly from that image, and they should not interfere with each other.  These “instances” are the actual running containers.

So why is that important? Well, we’ve just alluded to one reason: Containers can run software across different operating systems (various Linux distributions, Windows, Mac OS, etc.).  You can build a package once and run it in many different places. It should seem pretty obvious at this point, but in this way, containers are a great mechanism for application packaging and deployment.

To build on this point, containers are also a great way to distribute your packages as a developer.  I can build my application on my development machine, create a container image that includes the application and everything it needs to run, and push that image to a remote location (typically called a container registry) where it can be downloaded and turned into one or more running instances.

I said that you can package everything your container needs to run successfully, but the last point to make is that the nature of the container package gives you a way to enforce a clear boundary for your application and a way to enforce runtime isolation. This feature is important when you’re running a mix of various applications and tools…and you want to make sure a rogue process built or run by someone else doesn’t interfere with the operation of your application.

Container Orchestrators

So containers came along and provided a bunch of great benefits for me as a developer.  However, what if I start building an application, and then I realize that I need a way to organize and run multiple instances of my container at runtime to address the expected demand?  Or better yet, if I’m building a system comprised of multiple microservices that all need their own container instances running?  Do I have to figure out a way to maintain the desired state of this system that’s really a dynamic collection of container instances?

This is where container orchestration comes in. A container orchestrator is a tool that helps manage how your container instances are created, scaled, managed at runtime, placed on underlying infrastructure, and networked with each other. The "underlying infrastructure" is a fleet of one or more servers that the orchestrator manages – the cluster. Ultimately, the orchestrator helps manage the complexity of taking your container-based, in-development applications to a more robust platform.

Typically, interaction with an orchestrator occurs through a well-defined API, and the orchestrator takes up the tasks of creating, deploying, and networking your container instances – exactly as you’ve specified in your API calls across any container host (servers included in the cluster).

Using these fundamental components, orchestrators provide a unified compute layer on top of a fleet of machines that allows you to decouple your application from these machines. And the best orchestrators go one step further and allow you to specify how your application should be composed, thus taking the responsibility of running the application and maintaining the correct runtime configuration…even when unexpected events occur.
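The post doesn't prescribe a particular client, but as one hedged illustration of "interaction through a well-defined API," here is how declaring desired state and observing the result might look with the official Kubernetes Python client; the deployment name and namespace are placeholders:

from kubernetes import client, config

# Load credentials the same way kubectl does (from ~/.kube/config).
config.load_kube_config()

apps = client.AppsV1Api()

# Declare desired state: scale a hypothetical "planesnet-api" deployment to 3 replicas.
apps.patch_namespaced_deployment_scale(
    name="planesnet-api",
    namespace="default",
    body={"spec": {"replicas": 3}},
)

# The control loops on the master converge actual state toward that desired state;
# we can observe the result by listing the pods the deployment manages.
core = client.CoreV1Api()
for pod in core.list_namespaced_pod("default", label_selector="app=planesnet-api").items:
    print(pod.metadata.name, pod.status.phase)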


Kubernetes

Kubernetes is a container orchestrator that delivers the capabilities mentioned above. (The name “Kubernetes” comes from the Greek term for “pilot” or “helmsman of a ship.”) Currently, it is the most popular container orchestrator in the industry.

Kubernetes was originally developed by Google, based in part on the lessons learned from developing Borg, their internal cluster management and scheduling system. Google open-sourced Kubernetes in 2014 and later donated the project to the Cloud Native Computing Foundation (CNCF) to encourage community involvement in its development. The CNCF is a child entity of the Linux Foundation and operates as a "vendor-neutral" governance group. Kubernetes is now consistently in the top ten open-source projects based on total contributors.

Many in the industry say that Kubernetes has “won” the mindshare battle for container orchestrators, but what gives Kubernetes such a compelling value proposition?  Well, beyond meeting the capabilities mentioned above regarding what an orchestrator “should” do, the following points also illustrate what makes Kubernetes stand out:

  • The largest ecosystem of self-driven contributors and users of any orchestrator technology facilitated by CNCF, GitHub, etc.
  • Extensive client application platform support, including Go, Python, Java, .NET, Ruby, and many others.
  • The ability to deploy clusters on-premises or in the cloud, including native, managed offerings across the major public cloud providers (AWS, GCP, Azure). In fact, you can use the SAME API with any deployment of Kubernetes!
  • Diverse workload support with extensive community examples – stateless and stateful, batch, analytics, etc.
  • Resiliency – Kubernetes is a loosely-coupled collection of components centered around deploying, maintaining and scaling workloads.
  • Self-healing – Kubernetes works as an engine for resolving state by converging the actual and the desired state of the system.

Kubernetes Architecture

A Kubernetes cluster will always include a “master” and one or more “workers”.  The master is a collection of processes that manage the cluster, and these processes are deployed on a master node or multiple master nodes for High Availability (HA).  Included in these processes are:

  • The API server (kube-apiserver) and a distributed key-value store for the persistence of cluster management data (etcd)
  • The core control loops for monitoring existing state and managing desired state (kube-controller-manager)
  • The core control loops that allow specific cloud platform integration (cloud-controller-manager)
  • A scheduler component for placing Kubernetes container groups, known as pods (kube-scheduler)

Worker nodes are responsible for actually running the container instances within the cluster.  In comparison, worker nodes are simpler in that they receive instructions from the master and set out serving up containers.  On the worker node itself, there are three main components installed which make it a worker node in a Kubernetes cluster: an agent called kubelet that identifies the node and communicates with the master, a network proxy for interfacing with the cluster network stack (kube-proxy), and a plug-in interface that allows kubelet to use a variety of container runtimes, called the container runtime interface.

diagram of Kubernetes architecture


Managed Kubernetes and Azure Kubernetes Service

"Managed Kubernetes" is a deceptively broad term that describes a scenario where a public cloud provider (Microsoft, Amazon, Google, etc.) goes a step beyond simply hosting your Kubernetes clusters in virtual machines to take responsibility for deploying and managing your cluster for you. Or, more accurately, they will manage portions of your cluster for you. I say "deceptively" broad for this reason – the portions that are "managed" vary by vendor.

The idea is that the cloud provider:

  1. Is experienced at managing infrastructure at scale and can leverage tools and processes most individuals or companies can't.
  2. Is experienced at managing Kubernetes specifically and can leverage dedicated engineering and support teams.
  3. Can add additional value by providing supporting services on the cloud platform.

In this model, the provider does things like abstracting the need to operate the underlying virtual machines in a cluster, providing automation for actions like scaling a cluster, upgrading to new versions of Kubernetes, etc.

So the advantage for you, as a developer, is that you can focus more of your attention on building the software that will run on top of the cluster, instead of on managing your Kubernetes cluster, patching it, providing HA, etc. Additionally, the provider will often offer complementary services you can leverage like a private container registry service, tools for monitoring your containers in the cluster, etc.

Microsoft Azure offers the Azure Kubernetes Service (AKS), which is Azure’s managed Kubernetes offering. AKS allows full production-grade Kubernetes clusters to be provisioned through the Azure portal or automation scripts (ARM, PowerShell, CLI, or combination).  Key components of the cluster provisioned through the service include:

  • A fully managed, highly available master. There's no need to run separate virtual machines for the master components; the service provides them for you.
  • Automated provisioning of worker nodes – deployed as Virtual Machines in a dedicated Azure resource group.
  • Automated cluster node upgrades (Kubernetes version).
  • Cluster scaling through auto-scale or automation scripts.
  • CNCF certification as a compliant managed Kubernetes service. This means it leverages the Cloud-controller-manager standard discussed above, and its implementation is endorsed by the CNCF.
  • Integration with supporting Azure services including Azure Virtual Networks, Azure Storage, Azure Role-Based Access Control (RBAC), and Azure Container Registry.
  • Integrated logging for apps, nodes, and controllers.

Conclusion

The world of containers continues to evolve, and orchestration is an important consideration when deploying your container-based applications to environments beyond “development.”  While not simple, Kubernetes is a very popular choice for container orchestration and has extremely strong community support.  The evolution of managed Kubernetes makes using this platform more realistic than ever for developers (or businesses) interested in focusing on “shipping” software.

I recently encountered an issue when trying to create an Exact Age column for a contact in Microsoft Dynamics CRM. There were several solutions available on the internet, but none of them was a good match for my specific situation. Some ideas I explored included:

  1. Creating a calculated field using the formula DiffInDays(DOB, Now()) / 365 or DiffInYears(DOB, Now()) – I used this at first, but if the calculated field is a decimal type, you end up with a fractional value like 46.9 years old, which is not desirable. If the calculated field is a whole number type, the value is always rounded. So, if the DOB is 2/1/1972 and the current date is 1/1/2019, the Age shows as 47 when the contact is actually still 46 until 2/1/2019.
  2. Using JavaScript to calculate the Age – The problem with this approach is that if the record is not saved, the data becomes stale. It also does not work in a view (i.e., if you want to see a list of client ages). The JavaScript solution is really geared toward the form UI experience only.
  3. Using Workflows with Timeouts – This approach seemed overly complicated and cumbersome for updating values daily across so many records.

Determining Exact Age

Instead, I decided to plug some of the age scenarios into Microsoft Excel and simulate Dynamics CRM’s calculations to see if I could come up with any ideas.

Note: 365.25 is used to account for leap years. I originally used 365, but the data was incorrect. After reading about leap years, I decided to plug 365.25 in, and everything lined up.

Excel Formulas

Setting up the formulas above, I was able to calculate the values below. I found that subtracting the DATEDIF Rounded value from the DATEDIF Actual value produced a negative value whenever the contact’s birthday (month/day) fell after the current date (2/16/2019 at the time). This allowed me to introduce a factor of -1 whenever the Difference was less than or equal to 0. Using this finding, I set up the solution in CRM.

Excel Calculations
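
The Excel screenshots aren’t reproduced here, but the simulation comes down to a few columns along these lines (the exact cell layout is assumed: DOB in A2, the “as of” date in B2):

Actual (C2):      =(B2-A2)/365.25
Rounded (D2):     =ROUND((B2-A2)/365.25, 0)
Difference (E2):  =C2-D2    (zero or negative when the birthday hasn’t occurred yet this year)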

The Solution

  1. Create the necessary fields.

    Field          | Data Type      | Field Type | Other                | Formula
    DOB            | Date and Time  | Simple     | Behavior: User Local |
    Age Actual     | Decimal Number | Calculated | Precision: 10        | DiffInDays(new_dob, Now()) / 365.25
    Age Rounded    | Whole Number   | Calculated |                      | DiffInDays(new_dob, Now()) / 365.25
    Age Difference | Decimal Number | Calculated | Precision: 10        | new_ageactual - new_agerounded
    Age            | Whole Number   | Calculated |                      | See below

  2. Create a business rule for DOB that sets it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is populated automatically; the other calculated fields depend on it.
    Business Rules
  3. Set up the Age calculated field as follows:
    Age Calculated Field
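
The calculated-field screenshot isn’t reproduced here, but based on the logic above, the Age field’s definition amounts to the following (field names are taken from the table; the IF/THEN form is only a sketch of the calculated field’s condition and action):

IF new_agedifference <= 0
    THEN Age = new_agerounded - 1    (the birthday hasn’t occurred yet this year)
ELSE Age = new_agerounded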

Once these three steps have been completed, your new Age field should be ready to use. I created a view to verify the calculations. I happened to be writing this post very late on the night of 2/16/2019. I wrote the first part before 12:00 a.m., then I refreshed the view before taking the screenshot below. I was happy to see Age Test 3 record flip from 46 to 47 when I refreshed after 12:00 a.m.

Age Solution Results

Determining Exact Age at Some Date in the Future

The requirement that drove my research for this solution was the need to determine the exact age in the future. Our client needed to know the age of a traveler on the date of travel. Depending on the country being visited and the age of the traveler on the date of departure, different forms would need to be sent in order to prevent problems when the traveler arrived at his or her destination. The solution was very similar to the Age example above:

The Solution

  1. Here is an overview of the entity hierarchy:
    Age at Travel Entities
  2. Create the necessary fields.

    Entity       | Field                    | Data Type      | Field Type | Other                | Formula
    Trip         | Start Date               | Date and Time  | Simple     | Behavior: User Local |
    Contact      | DOB                      | Date and Time  | Simple     | Behavior: User Local |
    Trip Contact | Age at Travel Actual     | Decimal Number | Calculated | Precision: 10        | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Rounded    | Whole Number   | Calculated | n/a                  | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Difference | Decimal Number | Calculated | Precision: 10        | new_ageattravelactual - new_ageattravelrounded
    Trip Contact | Age at Travel            | Whole Number   | Calculated | n/a                  | See below

  3. Create a business rule for the Contact’s DOB field that sets it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, DOB is populated automatically; the other calculated fields depend on it.
    Business Rules
  4. Set up the Trip Contact’s Age at Travel calculated field as follows:
    Age at Travel Calculated Field

Once these steps have been completed, your new Age at Travel field should be ready to use. I created a view to verify the calculations.

You’ll notice that in the red example, the trip starts on 8/14/2020. The contact was born on 9/29/2003 and is 16 on the date of travel but turns 17 a month or so later. In the green example, the trip is also on 8/14/2020. The contact was born 4/12/2008 and will turn 12 before the date of travel.

Age at Travel Solution Results

Conclusion

While there are several approaches to the Age issue in Dynamics CRM, this is a great alternative that requires no code and works in real time. I hope you find it useful!

Today, let’s talk about network isolation and traffic policy within the context of Kubernetes.

Network Policy Specification

Kubernetes has a first-class notion of network policy that lets you determine which pods are allowed to talk to which other pods. While these policies are part of the Kubernetes specification, it is tools like Calico and Cilium that actually implement and enforce them.

Here is a simple example of a network policy:

...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted
  ...

In the example above, only pods carrying the label zone: trusted are allowed to send incoming traffic to the pods selected by this policy.
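
For context, a complete policy built around that ingress rule might look like the following; the policy name and the app: backend label identifying the protected pods are assumptions for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted        # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                # hypothetical label on the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted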

egress:
  - action: deny
    destination:
      notSelector: ns == 'gateway'

The example above deals with outgoing traffic and uses the extended policy syntax implemented by tools such as Calico, rather than the core Kubernetes NetworkPolicy resource. It ensures that outgoing traffic is denied unless the destination matches the label selector ns == 'gateway'.
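
A roughly equivalent intent can be expressed with the core NetworkPolicy resource by allowing egress only to namespaces carrying a gateway label; the policy name and the label key/value below are illustrative assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-to-gateway   # hypothetical policy name
spec:
  podSelector: {}                     # applies to every pod in this namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: gateway                 # assumed namespace label

Keep in mind that a default-deny egress posture like this also blocks DNS lookups unless you explicitly allow traffic to your cluster’s DNS service.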

As you can see, network policies are important for isolating pods from each other in order to avoid leaking information between applications. However, if you are dealing with data that requires higher trust levels, you may want to consider isolating the applications at the cluster level. The following diagrams depict both logical (network policy based) and physical (isolated) clusters.

Diagram of a Prod Cluster
Diagrams of Prod Team Clusters

Network Policy is NOT Traffic Routing…Enter Istio!

Network policies, however, do not allow us to control the flow of traffic on a granular level. For example, let’s assume that we have three versions of a “reviews” service (a service that returns user reviews for a given product). If we want the ability to route the traffic to any of these three versions dynamically, we will need to rely on something else. In this case, let’s use the traffic routing provided by Istio.

Istio is a tool that manages the traffic flow across services using two primary components:

  1. The Envoy proxy (more on Envoy later in the post), which distributes traffic based on a set of rules.
  2. Pilot, which manages and configures the traffic rules that let you specify how traffic should be routed.

Diagram of Istio Traffic Management

Here is an example of an Istio policy that directs all traffic to the V1 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Here is a Kiali Console view of all “live” traffic being sent to the V1 version of the “reviews” service:

Kiali console screenshot
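
One detail the VirtualService examples rely on: the subsets v1 and v3 have to be defined somewhere, typically in a DestinationRule along the lines of the sketch below (the version labels follow the convention of Istio’s bookinfo sample and are an assumption here):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1       # matches pods labeled version=v1
  - name: v3
    labels:
      version: v3       # matches pods labeled version=v3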

Now here’s an example of an Istio policy that directs all traffic to the V3 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3

And here is a Kiali Console view of all “live” traffic being sent to the V3 version of the “reviews” service:

Kiali console screenshot v3

Envoy Proxy

Envoy is a lightweight proxy with powerful routing constructs. In the example above, the Envoy proxy is deployed as a “sidecar” alongside our services (the product page and reviews services) and handles their outbound traffic. This lets Envoy dynamically route all outbound calls from the product page to the appropriate version of the “reviews” service.

We already know that Istio makes it simple for us to configure traffic routing policies in one place (via Pilot). But Istio also makes it simple to inject the Envoy proxy as a sidecar. The following kubectl command labels the namespace for automatic sidecar injection:

#--> Enable Side Car Injection
kubectl label namespace bookinfo istio-injection=enabled

As you can see, each pod now has two containers (the service itself and the Envoy proxy):

# Get all pods 
kubectl get pods --namespace=bookinfo
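
One quick way to confirm the injection worked is to print the container names inside one of the pods; the pod name below is a placeholder, and you should see your application container alongside istio-proxy:

# Print the container names for a given pod
kubectl get pod <pod-name> --namespace=bookinfo -o jsonpath='{.spec.containers[*].name}'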

I hope this blog post helps you think about traffic routing between Kubernetes pods using Istio and Envoy. In future blog posts, we’ll explore other facets of a “service mesh,” a common substrate for managing a large number of services; traffic routing is just one of those facets.