Implementing a cloud strategy and modernizing legacy systems are two core objectives of most IT organizations. Purchasing a commercial off-the-shelf (COTS) product or SaaS offering can speed up modernization and comes with many advantages. COTS products have a proven track record and address specific business needs that would be difficult to justify building yourself. A COTS product also shifts the burden of creating features from your organization to the vendor. Finally, COTS products promise a shorter time to implementation. But even though you’ll be purchasing a solution to 80% of your problem, the hardest parts of implementing a COTS product are still your responsibility. Below are some key areas you will still own.

Security and Infrastructure

Security and infrastructure are your responsibility. An off-the-shelf or Software as a Service (SaaS) product won’t address them for you. If it’s a SaaS product, how will your hybrid network access it, and how is that access governed? You’ll need to perform a risk assessment of the SaaS product that covers how you connect to it, how it stores its data, and even how the vendor builds the software. If it’s an off-the-shelf product, how will it be installed in the cloud? Ask whether it is cloud-native or needs to run on virtual machines in the cloud. If it runs on virtual machines, are those hardened, and who has access to them? Cloud virtual machines complicate networking: they need to be reachable from on-premises, and they may still need to reach into the cloud or out to the internet. That can leave a wide attack surface you’ll need to account for. Security and infrastructure are the biggest concern, and you’ll need to own them.


Automation

One of the promises of moving to the cloud is gaining business agility, and automation is a key component of reaching that goal. A SaaS product removes the burden of deploying and maintaining the application, but you may still need to automate some aspects. For example, if the SaaS product must be configured, it might expose both a UI and an API for doing so. It’s in your best interest to bring the SaaS product into your normal development pipeline and consider infrastructure as code as it applies to it. If you purchase a COTS product, be sure you can stand up an entire environment, including installation of the product, with the click of a button. There is no excuse for not automating everything, and there are plenty of tools in Azure DevOps pipelines for integration and automation.
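As a sketch of what "stand up an entire environment with the click of a button" can look like, here is a hypothetical Azure DevOps pipeline stage that provisions infrastructure and then silently installs a COTS product. Every name, path, and install flag below is a placeholder assumption, not from any specific product:

```yaml
# Hypothetical Azure DevOps pipeline stage. Service connection,
# resource group, template path, and install script are all placeholders.
stages:
  - stage: ProvisionAndInstall
    jobs:
      - job: Deploy
        steps:
          # Provision the environment from an ARM template (infrastructure as code)
          - task: AzureResourceManagerTemplateDeployment@3
            inputs:
              deploymentScope: 'Resource Group'
              azureResourceManagerConnection: 'my-service-connection'
              resourceGroupName: 'cots-env-rg'
              location: 'East US'
              csmFile: 'infrastructure/environment.json'
          # Unattended install of the COTS product; flags are vendor-specific
          - script: ./install-cots.sh --silent --config config/prod.json
            displayName: 'Install COTS product'
```

The point is that the COTS installation is just another scripted pipeline step, not a manual runbook.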


Integration

A COTS product provides 80% of the functionality you need, but what about the other 20%? The remaining functionality the product doesn’t provide is likely what differentiates your company from the others. There are always edge cases and custom features you’ll want that the vendor either doesn’t provide or would be expensive to have the vendor implement. You’ll need to bring that work in-house or hire a system integrator to hook things together. Whether it’s a SaaS or COTS product, integrating with the applications that provide the rest of your functionality is your responsibility, as is understanding how integration will work before purchasing the product. A good COTS product will have a solid HTTP REST API. If the product you’re purchasing doesn’t have a simple API, consider a wrapper to make integration easier; API Management is an Azure service that can do that translation for you. You might even find you want to consolidate the COTS APIs into one that makes sense to the rest of your systems. Large COTS products should also support messaging of some kind, which helps build loosely coupled components with high cohesion. COTS products might offer file-based and database integration, but these should be avoided. Integration of the COTS product is your responsibility, and the effort can equal the implementation effort of the COTS product itself.
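As a sketch of the wrapper idea, an API Management inbound policy can present a clean REST facade in front of a legacy COTS endpoint. The URLs, paths, and header values below are illustrative assumptions:

```xml
<!-- Hypothetical API Management policy: callers see a clean REST URL,
     while the policy rewrites each request to the COTS product's
     legacy endpoint behind the scenes. -->
<policies>
  <inbound>
    <base />
    <!-- Route to the COTS product's real (legacy) address -->
    <set-backend-service base-url="https://cots.internal.example.com/legacy-api" />
    <!-- Map the friendly path onto the vendor's awkward one -->
    <rewrite-uri template="/LegacyService.asmx" />
    <!-- Strip the gateway key before it reaches the backend -->
    <set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
</policies>
```

Consumers then code against the facade, and the vendor-specific details stay in one place.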


COTS products can provide great benefits to your company and deliver new functionality quickly. Understand that your IT department will still have to drive the overall architecture, and you are responsible for everything around the COTS product. The bulk of this work falls under security, infrastructure, automation, and integration. Focus on these concerns and you’ll have a successful implementation.

While personnel management is a sub-category of Human Resources (HR) that focuses on administration, its tasks and responsibilities can extend beyond the duties of an HR manager. Personnel managers hold an important role, focusing on hiring and developing employees to become more valuable to the company.

A few of these areas of interest include: 

  • Job analyses
  • Strategic personnel planning
  • Performance appraisals
  • Benefit coordination
  • Recruitment
  • Screening
  • New employee orientation
  • Training
  • Wages
  • Dispute resolution
  • Other record-keeping duties

PowerApps and Personnel Management 

Now I bet you’re wondering how this ties in with PowerApps. Given the various areas a personnel manager can be involved in, doesn’t it make sense to have one application where everything lives, so that a busy personnel manager can easily navigate day-to-day duties, get the job done more efficiently, and have it readily available for other team members to view and analyze?

How bizarre would it be if I told you we could build this application with little to no code and have it ready to use in less than half the time it would take a developer to code it from scratch? Not bizarre, and very doable. With PowerApps, one can quickly build custom business applications that connect to your business data stored either in the underlying data platform, Common Data Service for Apps, or in various online and on-premises data sources like Azure, SharePoint, Excel, Office 365, Dynamics, SQL Server, and so on.

Why PowerApps?

Apps that are built using PowerApps transform your manual business processes to digital, automated processes. Even more good news – these apps will have a responsive design and can run on any browser or your mobile device. PowerApps will potentially alleviate the need to hire expensive custom developers and this will give you the power and tools necessary to move your business forward.

Let’s Take a Closer Look

If a personnel manager is doing the following, this is how PowerApps can be integrated:

Personnel Management Duty: Posting job ads, reviewing resumes, conducting interviews and making a final decision with management.

PowerApps Solution: This can be done through the Business Process Flow. As you can see with the example below, you will be able to ensure that users enter data consistently and are taken through the same steps every time they work through this type of process.

Stages in Business Process Flow

Personnel Management Duty: Analyze salary data and reports to determine competitive compensation rates.

PowerApps Solution: Power BI, a modern data visualization tool, lets you spot trends in real time and make better, more informed decisions based on your specified dataset. The example below depicts the various ways to display data using custom visualizations. Imagine the possibilities!

Sales Dashboard

Personnel Management Duty: Develop and maintain a human resources system that meets the company’s information needs.

PowerApps Solution: Use Dynamics 365, an app within PowerApps. Through its unified interface, your organization will have an application that is easy to use with the flexibility to grow.


Personnel Management Duty: Continually monitor changing laws, legislation movements, arbitration decisions and collective bargaining contracts.

PowerApps Solution: The dashboard management that Dynamics 365 offers makes it easy to check for recent changes within your system.

Sales Activity Dashboard

Personnel Management Duty: Continually deliver presentations to management and executives regarding current and future human resources policies and practices.

PowerApps Solution: Use the PowerApps Unified Interface to present detailed reports, dashboards, and forms. You’ll be able to demonstrate the versatility of the application on various devices.

 Customizing Applications

PowerApps not only gives you the capability to drive your business growth, but it also puts you at ease with the ability to change, update, delete, and customize your application as you see fit. Personnel management is no simple feat, but using PowerApps can make your mission more manageable while keeping everything in one place.

When agencies decide to move their applications to the commercial cloud, the Defense Information Systems Agency (DISA) mandates specific approval and certification to connect to the Cloud Access Point (CAP). In this blog post, I will describe the process for connecting a mission application in an Azure Government IL5 environment to the DoD DISA CAP. This post is based on the DISA DoD Cloud Connection Process Guide, which covers connecting to the DoD DISA CAP for both cloud providers (AWS, Azure, etc.) and cloud-consuming applications (“mission applications”). Unfortunately, I found the document to be outdated and hard to follow.

Here are some assumptions, i.e. steps you have likely already completed or are working on before requesting the CAP connection:

  • Designed network architecture  
  • Identified the components used in the mission application (VM, OS, Application, Database, data at rest/transit, vulnerability compliance report) 
  • Performed assessment and authorization such as:
    • Started applying STIGs to all servers. This process may take time, and you may continue to implement STIGs beyond the CAP connection
    • Deployed the agent for Host-Based Security Service (HBSS)
    • Applied patches to all VMs
    • Approved for an Interim Authority to Test (IATT) or Authority to Operate (ATO)

Registration Process 

Here is the starting point for the DISA CAP connection journey:  

  1. DISA Systems/Network Approval Process (SNAP) Registration – The SNAP database stores the required documents and provides workflow status. To obtain a SNAP account, go to the SNAP site for registration (CAC required). The registration will ask for materials such as a DoD 2875 System Authorization Access Request (SAAR) form with justification for access. The diagram below shows the detailed steps for getting a SNAP account.
  2. SNAP Project Registration
    • Register C-ITPS Project
    • Submit Consent to Monitor Agreement
    • Submit a Business Case Analysis
    • Submit an Initial Contact form
  3. Develop SNAP artifact package
    • Input the Ports, Protocols, and Services Management (PPSM) Registration Number – Obtained from the DoD PPSM Registry
    • DoD NIC IP Range – Request the IP address space from DISA Network Information Center (NIC)
    • Interim Authorization to Test (IATT) or an Authorization to Operate (ATO) Memo – A formal statement received from an Authorizing Official (AO)
    • Authorizing Official (AO) Memo – AO Appointment Letter
    • Cybersecurity Service Provider (CSSP) Agreement
    • Network Diagram of the application
    • Plan of Actions & Milestones (POA&M) report
    • Security Assessment Report (SAR)
    • System Security Plan (SSP)
  4. Obtain Connection Approval
    • Submit the Package within SNAP
    • Receive Cloud Permission to Connect (CPTC) within five business days
    • Acknowledgment of Registration – DISA Cloud Approval Office (CAO) issues a CPTC Memo (or returns package for rework/resubmission)
  5. Complete Secure Cloud Computing Architecture (SCCA) Onboarding
    • Complete SCCA Onboarding Form
    • Complete DISA Firewall Request Sheet
    • Complete Express Route Steps
  6. Technical exchange meeting with DISA Engineering Team
    • Provide the service key and peering location
    • Receive the shared key from DISA

Connect to Authorized CSP-CSO (Azure)  

The steps for connecting via the DISA enterprise BCAP depend on the impact level. The following steps apply to both Level 4 and Level 5.

  1. Obtain IP Addresses – Request the DoD IP address range for the mission application from the DoD NIC
  2. Obtain DNS name with A record (forward lookup) and PTR (reverse DNS) for:  
    • Application
    • Federation server (ADFS)
    • Mail server 
  3. Obtain certificates for the DNS and server certificate for each VM
  4. Configure the app to use the Enterprise Email Security Gateway (EEMSG)

Create and Modify Azure ExpressRoute Circuit 

  1. Create ExpressRoute following these steps:
    Create ExpressRoute Circuit
  2. Send the service key, Peering location to the DISA BCAP team
    Sending Service Key
  3. Periodically check the status and state of the circuit. When the circuit status changes to Enabled and the provider status changes to Provisioned, the connection is established.
  4. Provision Virtual Network 
  5. Change the DNS servers to Custom and provide the IP addresses supplied by DISA so that traffic can pass through the CAP connection
  6. Link the Virtual Network to the Express Route Circuit
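The steps above can be sketched with the Azure CLI. Resource names, the peering location, bandwidth, and the provider are placeholder assumptions; your values will come from your DISA coordination:

```shell
# Step 1: create the ExpressRoute circuit (names/values are placeholders)
az network express-route create \
  --resource-group disa-cap-rg --name mission-er-circuit \
  --peering-location "Washington DC" --bandwidth 200 \
  --provider "Provider Name" --sku-tier Premium --sku-family MeteredData

# Steps 2-3: retrieve the service key to send to the DISA BCAP team,
# then poll until the provider state reaches "Provisioned"
az network express-route show \
  --resource-group disa-cap-rg --name mission-er-circuit \
  --query "{serviceKey:serviceKey, providerState:serviceProviderProvisioningState}"

# Step 6: link the virtual network (via its ExpressRoute gateway) to the circuit
az network vpn-connection create \
  --resource-group disa-cap-rg --name er-connection \
  --vnet-gateway1 mission-er-gateway \
  --express-route-circuit2 mission-er-circuit
```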

AIS has a team specialized in navigating the process of getting ATO and connecting to DISA CAP. Contact us today and let’s start the conversation of getting you connected to DISA CAP.

It’s been a transformational year at AIS. We worked on some incredible projects with great clients, partners, and co-workers. We learned a lot! And we enjoyed telling you all about it here on the AIS Blog.

As we close out the year, here are the top 10 most read and shared posts of 2019*:

  1. Federated Authentication with a SAML Identity Provider, by Selvi Kalaiselvi
  2. Newly Released JSON Function for Power Apps, by Yash Agarwal
  3. So, You Want to Rehost On-Premises Applications in the Cloud? Here’s How., by Nasir Mirza
  4. Highly-Available Oracle DB on Azure IaaS, by Ben Brouse
  5. The New Windows Terminal – Install, Interact, and Customize, by Clint Richardson
  6. Cloud Transformation Can Wait… Get Me to the Cloud Now!, by Vishwas Lele
  7. HOW TO: Create an Exact Age Field in Microsoft PowerApps and Dynamics, by Chris Hettinger
  8. SharePoint Framework (SPFx) Innovation Project Part I, by Nisha Patel, Elaine Krause, and Selvi Kalaiselvi
  9. Azure Sentinel: A Tip of the Microsoft Security Iceberg, by Benyamin Famili
  10. What Is API Management?, by Udaiveer Virk

Happy New Year to all our readers and bloggers! Be sure to follow AIS on Twitter, Facebook or LinkedIn so you’ll never miss an insight. Perhaps you’ll even consider joining our team in 2020?

*We feature each of our bloggers once on the top 10 list, but we had a few top posts we would be remiss not to mention, including another blog from Yash Agarwal on How To Use Galleries in Power Apps and two more posts from Vishwas Lele, Oracle Database as a Service on Azure (Almost!) and Traffic Routing in Kubernetes via Istio and Envoy Proxy. Enjoy!

In this article, I will show you how we can create reusable custom components in canvas apps. We will work on two simple limitations on the current controls in Canvas Apps and create custom components to overcome those limitations.

1. A specific range cannot be set on the date picker control that would limit the users from selecting dates from a specific range only.

2. Power Apps currently limits the text input control to two formats, i.e., text and number. We will create a custom component to support ‘regex’ as a general setting that can be applied to multiple input fields of the same format. Let’s get started: we will first create the components and then see how to use them in the canvas app.

Creating Custom Components

To create the components in the Power Apps app studio, we first have to enable ‘Components’ in the experimental features tab from the advanced settings of the app.

Creating Custom Components

Date Picker

The current date control allows you to restrict only the year while configuring the control. Using already available controls, we can design a component for the date control that enables restricting to a range of dates. The display of the date control component is based on a gallery and is explained in detail here. We will start by creating a few custom properties for the component. The idea behind this date control is to enable functionality to restrict users to select the date from a specified range. For this, I created a “start date” and an “end date” property of data-type “date and time”. I have also added properties to customize the color of the different sub-controls of the component.

Date Picker

1. This is a label that displays the date selected from the custom date picker control component. The expression used on the “Text” property of this label:
Selected Date Label

2. This is an icon (‘calendar’) to expand the date picker. The expression used on the “OnSelect” property of this control:

Icon Calendar

Explanation: The visibility of the calendar is set here based on the “CalendarVisibility” variable. “SelectedRange” is a variable that sets the context for the appropriate month to be displayed on the date picker.

3. Upon clicking this icon, the user is navigated to the previous month. The expression used on the “OnSelect” property of this control is:

Previous Month Navigation

Explanation: This evaluates the month label based on the current month on the date picker, and the result is the previous month with respect to the current month. The expression used on the “DisplayMode” property of this control:

Display Mode

4. This is a label control and displays the selected month and year on the date picker.

5. Upon clicking this icon, the user is navigated to the next month based on the current month. The expression used on the “OnSelect” property of this control is:

On Select

Explanation: This evaluates the month label based on the current month on the date picker, and the result is the next month with respect to the current month. The expression used on the “DisplayMode” property of this control:

Display Mode

6. This is a gallery of buttons to show days week wise in a row. The expression used on the “OnSelect” property of the button inside of the gallery is:

On Select

Explanation: The “dateselected” variable value is set by clicking this button. The expression used on the “DisplayMode” property of this button control is:

Expression used on DisplayMode

Explanation: Here is the date validation that checks whether the date is within the range of the “start date” and “end date” properties and belongs to the currently selected month. The expression used on the “Text” property of the button control is (adapted from this reference link):

Date Validation

Explanation: This gets the date as per the day from the date picker. The expression used on the “Items” property of the gallery control is (adapted from this reference link):

Items property gallery control

Explanation: This creates the 42 cells for the date picker inside the gallery control. The expression used on the “TemplateFill” property of the gallery control is:

Template Fill Property

Explanation: This highlights the currently selected item (date) in the gallery control.

The selected date from this component can be accessed through the “SelectedDate” property. Note: The color properties defined are used in the component on all the individual controls to sync the color coordination amongst them.


Text Input

The current text input control only allows you to choose the input text format as a number or text. We can design a component that implements regex; based on the property (the formatting required), the text input is either formatted or the user is restricted to entering text of the type defined in the component. If the input text does not match the type chosen on the component, the field is reset and a warning message is displayed to the user.


7. This is a text input control and the text format is set through the “InputFormat” custom property created on this component. The expression used on the “OnChange” property of this control is:

Setting Text Format

Explanation: The “if” statement checks the “InputFormat” custom property for the text input against the respective regex statement and resets the input if the regex is not matched. The expression used on the “Mode” property of this control is:

Expression on Mode Property

Explanation: If the “InputFormat” on this component is set to “Password” then the mode will be set to password in order to mask the user input.

8. This is a label that is used to display the warning message to the user in case of a mismatch of the input based on the “InputFormat” property.

9. This is an icon used to display the warning sign to the user in case of a mismatch of the input based on the “InputFormat” property.


This is an “InputFormat” and the regex that this type of input is being compared to is:

Input Format Comparison


URL Format:

This is an “InputFormat”, and the regex that this type of input is compared against is:

URL Input Format


Password Format:

This is an “InputFormat”, and the regex that this type of input is compared against is:

Password Input Comparison

The password should include at least one or more of an upper case, lower case letters, a number, and a special character.
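The rule above can be expressed as a regular expression. The original post's exact pattern is shown only as an image, so this Python version is an illustrative stand-in; the 8-character minimum is an assumption not stated in the post:

```python
import re

# One lookahead per rule: an upper-case letter, a lower-case letter,
# a digit, and a special (non-alphanumeric) character.
# The 8-character minimum length is an assumed addition.
PASSWORD_RE = re.compile(r"^(?=.*[A-Z])(?=.*[a-z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$")

def is_valid_password(value: str) -> bool:
    """Return True when the value satisfies every lookahead in the pattern."""
    return PASSWORD_RE.match(value) is not None

print(is_valid_password("Passw0rd!"))  # True
print(is_valid_password("password"))   # False: no upper case, digit, or special
```

In the component, the same check would simply drive the reset-and-warn behavior described above.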

Number Format:

Number Format

10. This is a label that displays the converted value based on the formatting defined for the input number. The expression used on the “Text” property of this control is:

Display Converted Value based on formatting

Explanation: If the text input is not empty, then the numbers entered will be formatted as “XXX-XXXXXX”
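The formatting logic amounts to splitting the digits around a hyphen. The actual Power Apps formula is shown only as an image, so this Python sketch is a hypothetical equivalent:

```python
def format_phone(digits: str) -> str:
    """Format a run of digits as XXX-XXXXXX, mirroring the label's display.
    Hypothetical stand-in for the label's Text expression."""
    if not digits:
        return ""  # matches the "if the text input is not empty" guard
    return f"{digits[:3]}-{digits[3:]}"

print(format_phone("123456789"))  # 123-456789
```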

11. This is a text input control where the user enters the number. The “Format” property of this input is set to “Number”. Note: the label in 10 is positioned so that it overlaps the text input control in 11; that is how the user sees the input formatted as set in the label.

Using the Custom Components

Currently, Power Apps restricts the use of a component in galleries, data cards, data tables, etc. We will design a custom sign up form and use the components created above to register a user and save the information on the data source.

Custom Sign Up Form

12. This is a regular text input field that will allow users to enter simple text. The property of this control is set to “text”.

13. This is a component that we have created earlier and the property for this is set to “URL”. When users enter the URL, it is matched against the regex and if it is not in a valid URL format the field is reset.

14. This field expects the input to be in the form of an email address. The component created earlier is used for this control and the property is set to “Email”. If the input does not match against the regex for email, the field will be reset and a warning message will be displayed.

15. The property on the component for this is set to “Password”. If the input does not match as the regex set for the password (as mentioned next to the “Password” label), the field will be reset and the warning message will be displayed.

16. This input field expects the input in the form of a “Phone Number” and appends the ‘-’ to the numbers entered based on the regex configured while creating the component.

17. The custom property created for the component can be set here. For each of the above fields, the type of input (regex expression) is defined by setting this property.

Sign Up Form

18. This is the date control component and allows the user to pick a date by clicking on the calendar icon.

19. The custom properties for the date control component can be set here. The start date and the end date are configured to set a range of allowable dates. As an added feature, the colors for the individual controls within the date picker can also be configured here.

20. This is a clear field button and upon clicking this all the fields are cleared and the user creates a new entry.

21. This is a submit button and the user can submit their details by filling the form and pressing this button. Upon pressing this button, the details filled by the user are patched to the respective columns in the data source.

22. This is a warning generated for an inappropriate email address format that was entered by the user. As soon as the user clicks outside of this control, the field is reset.

23. This is the expanded date picker control, and the dates that the user can select are based on the start and end date range configured in the app. The dates that cannot be selected are highlighted with a strikethrough, and the buttons to navigate to the next/previous months are disabled.

In this article, we have seen how we can create custom reusable components in canvas apps with a low-code/no-code methodology. Components can be created from already available controls to overcome certain limitations and enhance the overall application.

Note: This blog post is *not* about Kubernetes infrastructure API (an API to provision a Kubernetes cluster). Instead, this post focuses on the idea of Kubernetes as a common infrastructure layer across private and public clouds.

Kubernetes is, of course, well known as the leading open-source system for automating the deployment and management of containerized applications. However, its uniform availability is, for the first time, giving customers a “common” infrastructure API across public and private cloud providers. Customers can take their containerized applications and Kubernetes configuration files and, for the most part, move to another cloud platform. All of this without sacrificing cloud provider-specific capabilities, such as storage and networking, that differ across cloud platforms.

At this point, you are probably thinking about tools like Terraform and Pulumi that have focused on abstracting the underlying cloud APIs. These tools have indeed enabled a provisioning language that spans cloud providers. However, as we will see below, Kubernetes’ “common” construct goes a step further: rather than being limited to a statically defined set of APIs, Kubernetes’ extensibility allows the API to be extended dynamically through the use of plugins, described below.

Kubernetes Extensibility via Plugins

Kubernetes plugins are software components that extend and deeply integrate Kubernetes with new kinds of infrastructure resources. Plugins realize interfaces like CSI (the Container Storage Interface), which defines an interface, along with minimum operational and packaging recommendations, for a storage provider (SP) to implement a compatible plugin.

Other examples of interfaces include:

  • Container Network Interface (CNI) – Specifications and libraries for writing plug-ins to configure network connectivity for containers.
  • Container Runtime Interface (CRI) – Specifications and libraries for container runtimes to integrate with kubelet, an agent that runs on each Kubernetes node and is responsible for spawning the containers and maintaining their health.

Interfaces and compliant plug-ins have opened the flood gates to third-party plugins for Kubernetes, giving customers a whole range of options. Let us review a few examples of “common” infrastructure constructs.

Here is a high-level view of how a plugin works in the context of Kubernetes. Instead of modifying the Kubernetes code for each type of hardware or cloud provider-offered service, plugins encapsulate the know-how to interact with the underlying hardware resources. A plugin can be deployed to a Kubernetes node as shown in the diagram below. It is the kubelet’s responsibility to advertise the capability offered by the plugin(s) to the Kubernetes API service.

Kubernetes Control Plane

“Common” Networking Construct

Consider a networking resource of type load balancer. As you would expect, provisioning a load balancer in Azure versus AWS is different.

Here is a CLI for provisioning ILB in Azure:

CLI for provisioning ILB in Azure

Likewise, here is a CLI for provisioning ILB in AWS:

CLI for provisioning ILB in AWS

Kubernetes, based on the network plugin model, gives us a “common” construct for provisioning the ILB that is independent of the cloud provider syntax.

Service manifest for provisioning an ILB in Kubernetes
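A sketch of that Service manifest follows. The internal-load-balancer annotations shown are the real Azure and AWS annotation keys, but the service name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    # Azure internal load balancer; the AWS equivalent is:
    # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer        # the "common" construct; the plugin does the rest
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```

Only the annotation changes between clouds; the Service definition itself is identical.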

“Common” Storage Construct

Now let us consider a storage resource type. As you would expect, provisioning a storage volume in Azure versus Google is different.

Here is a CLI for provisioning a disk in Azure:

CLI for provisioning a disk in Azure

Here is a CLI for provisioning a persistent disk in Google:

CLI for provisioning a persistent disk in Google

Once again, under the plugin (device) model, Kubernetes gives us a “common” construct for provisioning storage that is independent of the cloud provider syntax.

The example below shows a “common” storage construct across cloud providers: a claim for a persistent volume of size 1Gi with access mode “ReadWriteOnce”, associated with the storage class “cloud-storage”. As we will see next, persistent volume claims decouple us from the underlying storage mechanism.
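A sketch of that claim (the metadata name is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce          # mounted read-write by a single node
  storageClassName: cloud-storage
  resources:
    requests:
      storage: 1Gi
```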


The StorageClass determines which storage plugin gets invoked to support the persistent volume claim. In the first example below, StorageClass represents the Azure Disk plugin. In the second example below, StorageClass represents the Google Compute Engine (GCE) Persistent Disk.


StorageClass Diagram 2
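In YAML form, the two StorageClass definitions described above might look like this; the class name matches the claim, the provisioner names are the standard in-tree plugins, and the parameters are illustrative:

```yaml
# Azure Disk-backed storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
# GCE Persistent Disk-backed storage class (same name, used on a GCE cluster)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

The claim never changes; pointing the class at a different provisioner swaps the underlying storage.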

“Common” Compute Construct

Finally, let us consider a compute resource type. As you would expect, provisioning a compute resource in Azure versus GCE is different.

Here is a CLI for provisioning a GPU VM in Azure:

CLI for provisioning a GPU VM in Azure

Here is a CLI for provisioning a GPU in Google Cloud:

CLI for provisioning a GPU in Google Cloud:

Once again, under the plugin (device) model, Kubernetes gives us a “common” compute construct across cloud providers. In the example below, we request a compute resource of type GPU. An underlying plugin (NVIDIA) installed on the Kubernetes node is responsible for provisioning the requisite compute resource.

requesting a compute resource of type GPU
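A sketch of such a pod spec; the `nvidia.com/gpu` resource name is what the NVIDIA device plugin advertises, while the pod and image names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.0-base   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1          # scheduled onto a node whose device
                                     # plugin advertises GPU capacity
```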



As you can see from the examples discussed in this post, Kubernetes is becoming a “common” infrastructure API across private, public, hybrid, and multi-cloud setups. Even traditional “infrastructure as code” tools like Terraform are building on top of Kubernetes.


Let’s say you have a well-developed app written in Vue. It’s pretty big and has multiple teams working in the code base for both maintenance and feature development. Now, let’s say you need to include that app in an Angular app, and you want to avoid transposing the Vue app to Angular. This recently occurred on one of our projects: the task was to incorporate a Vue.js component into an existing Angular app, and several hang-ups eventually led to rewriting the Angular piece in Vue. The following walkthrough creates a proof of concept to demonstrate that adding a Vue component to an Angular app is possible.


I’ll be using a premade Vue Calculator and adding it to an Angular Tour of Heroes app. These should serve well enough as stand-ins for the real-world problem. I’ll be using VSCode throughout; however, any text editor ought to do.


The first step is to take inventory of the apps we’ll be using. To do that I’ll pull the calculator repo down locally and install all dependencies.  

Vue Formula

Now that the app is local and dependencies installed, I’ll check it out by serving the Vue app. 

Vue Formula 2

Then navigate to http://localhost:8080 in the browser.

Vue Calculator Snapshot

This step strictly speaking isn’t necessary, however, should something go wrong later on it’s good to know we at least started with functional components.

Now we can build this Vue app as a web component. To do this, I’ll modify the npm scripts located in package.json. Open package.json in VSCode and copy the build script. Paste it directly below the original and change the name; I’ll name mine build-wc to signify that it builds a web component. Next, I’ll prepend some flags to the build-wc script.

Build WC Formula

The target flag tells the CLI how to build the Vue app; I’ll use wc since we want to create a web component. Additional build target information can be found on the Vue CLI docs build targets page. The name flag determines the output JavaScript file name as well as the HTML tag I will use in Angular where I want the calculator to appear. Finally, the last parameter is the component I want to use as the entry point. Since the Vue Calculator consists of only one component, you could use either App.vue or Calculator.vue as the final argument and have similar results. Using Calculator.vue illustrates how you can use an individual component instead of an entire app. Now that I have created the web component build script, I’ll run it and again inspect the results.

Rerun the WC

I can now see a /dist directory with several files.

dist directory formula

I’ll install and use the npm package to serve this up and view the web component in the browser.

Install and use npm package

Now I can open the browser and navigate to

Vue Calc Demo

Opening the developer tools in Chrome and viewing the Elements tab reveals the custom <my-vue-calc></my-vue-calc> element mentioned earlier; I’ll include the web component the same way when I get to the integration. For now, you can once again see the calculator working as normal. You may notice that the calculator looks slightly different from the first time the app was served. This is due to the app having been built as a web component with Calculator.vue as the entry point, and it demonstrates how styling works within Vue apps. In App.vue, the calculator I am using has a style block with several styles added to the #app selector. However, since I built only the Calculator component of the larger app, I don’t get the styles from App.vue. For demo purposes, I’ll ignore the text alignment and font colors of the number keys. There are several gotchas when dealing with web components, so check out the Vue CLI docs for additional info.


Now that I am satisfied with the results of building the Vue web component I’ll pull down the Angular app to which I’ll be adding the calculator.

Angular App snapshot

And just like before, I’ll install dependencies.

Installing dependencies

I was having conflicts between the node-sass package and the version of Node I had installed, v12.9.1. Using a Node version manager, I changed the current version of Node to v11.15.0, which solved that problem. Once the app was cloned and dependencies were installed, I served the app, again, to ensure that it was working prior to any changes I made. To accomplish this I ran the following in the root directory of the Tour of Heroes app.
ng serve snapshot


Now that the app is working I added the custom web component html tag to index.html.

Adding HTML web component

I’ve added the ‘TEST’ text as a sanity check. Since the web component isn’t defined yet, the tag defaults to the behavior of a <div>. Looking at the browser again, the ‘TEST’ text now appears immediately following <my-root></my-root>.
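The addition to index.html amounts to just the custom tag with placeholder text; a minimal sketch of the relevant portion of the body (the tag name comes from the build step earlier):

```html
<body>
  <my-root></my-root>
  <!-- Custom element; behaves like a plain element until the web component JS is loaded -->
  <my-vue-calc>TEST</my-vue-calc>
</body>
```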
Test HTML Check in Vue

So far so good.

For organization purposes, I added a my-vue-calc directory nested in libs to hold the actual my-vue-calc.min.js code that I copied from the dist directory of the vue-calc app.
Add vue-calc app directory

In order to use the my-vue-calc component I need to load Vue.js in the Angular app. I accomplished this by installing vue as a dependency using npm.
Load Vue into the Angular App

Then I’ll add vue to the angular.json configuration file. While I’m here, I’ll also add the calculator to the scripts.
Angular JSON Configuration
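The angular.json change is adding both files to the scripts array of the build options. A sketch of the relevant fragment; the exact paths are assumptions based on where I placed the files earlier:

```json
"scripts": [
  "node_modules/vue/dist/vue.min.js",
  "src/libs/my-vue-calc/my-vue-calc.min.js"
]
```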

Having modified angular.json, you will likely have to stop and restart the server for the changes to be picked up and compiled. With that, the Angular app should rebuild itself and refresh the webpage, and the calculator should appear at the bottom.

Angular App Restart

Well… sort of… this is a very simplistic example, implemented in the simplest way. What if I want the web component inside one of the Angular components? Doing this would break up the source code a bit more and perhaps allow for lazy loading of the components. I like the idea of having a calculator on the individual hero detail pages. To achieve this, I’ll remove the custom element tag from index.html and move it to the bottom of app/hero-details.component.html. Navigating to a hero details page, right away a console error appears.

Navigating a Hero details page

The custom element should render a <div> with ‘TEST’ since I removed the web component JavaScript from the angular.json config file. To correct the error, I added the following to the @NgModule decorator.

Custom Elements formula snapshot

The CUSTOM_ELEMENTS_SCHEMA tells Angular that you will be using elements that are neither Angular components nor directives in the app; check out the Angular Docs for additional info. The ‘TEST’ text now appears on the hero details page.
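The change to app.module.ts amounts to importing the schema and listing it in the @NgModule decorator; a sketch of the relevant lines, with the module’s existing metadata elided:

```typescript
import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';

@NgModule({
  // ...existing declarations, imports, and bootstrap entries...
  schemas: [CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule { }
```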
Test Text in hero details

The only thing left to do is import the web component JavaScript. In this case, I’ll just add an import statement to the hero-detail.component.ts.
Import Web component JavaScript
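The import is a side-effect import of the bundled file; the relative path below is an assumption based on the libs directory created earlier, so adjust it to your project layout:

```typescript
// hero-detail.component.ts - loads the web component bundle so the
// <my-vue-calc> tag is defined when this component's template renders
import '../libs/my-vue-calc/my-vue-calc.min';
```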

And like magic once more!

Test Text

In summary, the example in this walkthrough is absolutely contrived. I could have just as easily brought a native Angular calculator app into the Tour of Heroes. However, there is no need for any calculator, Vue or otherwise, where I’ve added it. This walkthrough demonstrates the capability of adding a large, well-maintained Vue app to an Angular app while avoiding a rewrite of the Vue app and the additional maintenance of a new code base. There may be caveats that make this solution unworkable for your application, so use your best judgment. The Angular and Vue docs are excellent resources for additional information on these topics and many more; check them out.

Works Cited

Angular Docs. (2019, September 19). Retrieved from

Butler, K. (2019, September 24). Vue Calculator. Retrieved from GitHub:

Papa, J. (2019, September 24). Angular Tour of Heroes. Retrieved from GitHub:

Vue CLI Docs. (2019, September 19). Retrieved from Vue CLI:

As promised in the previous blog post, here is a detailed explanation of how to connect to APIs secured in Azure AD from SharePoint Framework (SPFx) web parts. Please read Part One of this blog for a thorough understanding of the SharePoint Framework, a comparison with other models, and its constraints/disadvantages before diving in further.

Connecting to APIs is essential functionality today, as it extends communication to outside data repositories. SharePoint web parts can render data not only from SharePoint lists and libraries but also from external repositories, which may be owned by anyone outside the organization and hosted on different platforms and domains. These repositories are connected for data retrieval from SharePoint via API (Application Programming Interface) calls. SPFx comes with several namespaces to handle the communication between an SPFx web part in SharePoint Online and other repositories via API calls.

Types of API Communications from SPFx

  • Connect to SharePoint APIs to access data residing in SharePoint (SPHttpClient with OData) 
    • Accesses data residing in SharePoint lists/libraries 
  • Connect to Microsoft Graph (MSGraphClient through MSGraphClientFactory) 
    • Accesses users and other user-related info from Azure Active Directory (AAD) 
  • Connect to enterprise APIs secured in Azure AD (using AadHttpClient & aadHttpClientFactory) 
    • Single-tenant implementation, where both Azure and SharePoint Online are under the same tenant. This blog covers the details of implementing this functionality. 
    • Multi-tenant implementation, where Azure and SharePoint Online are in different tenants 
  • Connect to anonymous APIs (using HttpClient to connect to public APIs for weather etc.) 
    • Accesses weather and other publicly available APIs

Connect to Azure AD Secured APIs

Microsoft Web API Permissions

Figure Credit: Microsoft


If your environment is already set up, ensure you have the latest version of Yeoman SharePoint generator by entering: 

npm update -g @microsoft/generator-sharepoint@latest

Steps to Develop, Deploy, and Test SPFx Connecting to Function API Secured in Azure AD

Once all the prerequisites are met, follow the steps below to develop, deploy, and test the SharePoint Framework web part connecting to an Azure API secured in Azure Active Directory.

1. Create an Azure function (HttpTrigger) returning mock data

Create Azure Function

Figure: Create a new Azure function

New Function Created

Figure: New Azure function is created

2. Create an HttpTrigger & add C# code to return the list of orders

Create Http Trigger

Figure: C# Azure Function code to return the list of orders
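A minimal sketch of such an HttpTrigger returning mock order data is shown below; the function name, type names, and fields are illustrative assumptions, not the original code. The authorization level is Anonymous here because authentication is handled by App Service Authentication in the next step:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetOrders
{
    [FunctionName("GetOrders")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
    {
        // Mock order data; a real implementation would query a data store.
        var orders = new[]
        {
            new { Id = 1, Title = "Widget", Quantity = 2 },
            new { Id = 2, Title = "Gadget", Quantity = 5 }
        };
        return new OkObjectResult(orders);
    }
}
```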


3. Secure the Azure function by enabling authentication/authorization via Azure Active Directory (AAD), which creates an app registration in AAD. Verify the Azure function works when called from the browser.

Secure Azure Function Screenshot

Figure: Configure authentication/authorization for the Azure function

4. Enable ‘App Service Authentication’

Authenticating App Services Screenshot

Figure: Selecting Azure Active Directory for authentication/authorization

5. Active Directory authentication is set up & the API is secured in Azure AD
Register Azure Function

Figure: Registered Azure function in Azure AD

6. Enable CORS (Cross-Origin Resource Sharing). Even though Azure & SharePoint Online are in the same tenant, they are in different domains.

Cross Origin Resource Sharing

Figure: Configure CORS

7. Add the SharePoint tenant URL.

Add SharePoint for CORS

    Figure: Add the SharePoint for CORS to authenticate the SharePoint site in Azure

8. Azure function API is secured in Azure AD & the application ID will be used in the SPFx web part.

Azure function installed on AD

Figure: Azure function is registered in the Azure AD as an Enterprise application

9. SharePoint online tenant/admin center in O365.

Available sites in SharePoint admin center

Figure: Available sites in the SharePoint admin center

10. Create an SPFx web part project to render the data by connecting to the API secured in Azure AD. Use Yeoman to generate the web part. Use this link for more information on generating web parts.

Yeoman generator to generate SPFx web part

Figure: Yeoman generator to generate SPFx web part

11. Add web API permission requests in config/package-solution.json file.

config\package-solution.json – add two web API permission requests

"webApiPermissionRequests": [
  {
    "resource": "contoso-api",
    "scope": "user_impersonation"
  },
  {
    "resource": "Windows Azure Active Directory",
    "scope": "User.Read"
  }
]

SPFx web part configuration file

Figure: API permissions in SPFx web part configuration file

12. Import namespaces for enterprise API communication.

Import AadHttpClient in src\webparts\[webpartname]\[webpartname]WebPart.ts to connect with the API:

import { AadHttpClient, HttpClientResponse } from '@microsoft/sp-http';


namespaces for enterprise API communication
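With the import in place, the web part can request a client for the API’s Azure AD app registration and call the function. A sketch of that call; the client ID and function URL below are placeholders for the values from your own registration, and the method name is illustrative:

```typescript
private _getOrders(): Promise<any[]> {
  return this.context.aadHttpClientFactory
    // Application (client) ID of the API's Azure AD app registration - placeholder
    .getClient('00000000-0000-0000-0000-000000000000')
    .then((client: AadHttpClient) =>
      client.get(
        'https://contoso-func.azurewebsites.net/api/GetOrders', // placeholder URL
        AadHttpClient.configurations.v1
      )
    )
    .then((response: HttpClientResponse) => response.json());
}
```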


13. Build, package, and upload the package to the SharePoint App Catalog.

gulp bundle --ship && gulp package-solution --ship

gulp clean (for redeploying after updates)

14. Add the SPFx package to the tenant app catalog in your Office 365 tenant. Deploying the package surfaces the API-related permission requests in the SharePoint admin center.

SPFx deploys API related file to the SharePoint

15. Approve requested API permissions.

From the SharePoint admin center in Office 365/SharePoint, approve the API from the API Management page.

Once the API is approved, the SPFx web part can be added to a SharePoint site page.

API permissions are available in SharePoint online admin center

Figure: API permissions are available in SharePoint online admin center

16. Create a new site page from the developers’ site and add the SPFx web part.

Add the web part to SharePoint page

Figure: Add the web part to the SharePoint page

17. If all goes well, your web part will be rendered with data that is served from the API call!
Rendered with Data

SPFx – Connect to APIs Gotchas

  • Connecting to an API secured in Azure AD did not work via the SPFx AadHttpClient & aadHttpClientFactory in SharePoint 2019 on-premises; this.context.aadHttpClientFactory did not work. I had chosen a web part targeting “SharePoint 2019 and SharePoint Online” when creating the SPFx web part via the Yeoman generator. Choose only ‘SharePoint Online’ to use AadHttpClient & aadHttpClientFactory
  • Microsoft’s example code did not work as-is. The Azure function needed a slight tweak, and an additional permission must be added to “webApiPermissionRequests” in config\package-solution.json:


{
  "resource": "Windows Azure Active Directory",
  "scope": "User.Read"
}


  • Single-tenant vs multiple tenant access
    • First, I set up the Azure function and secured it with AAD in a personal MSDN subscription tenant, then tried to connect from a Developer (free) O365/SharePoint Online SPFx web part. The API permission in the SharePoint admin center could not be approved; permission to access an API from a different tenant did not work with the way the Azure function API was configured in AAD.
    • To overcome that issue, I set up an Azure tenant under the developer O365 subscription with the same credentials. The API could then be approved. More Info…
    • Alternatively, configure the AAD API authentication as multi-tenant. More info…
  • gulp clean – important when re-deploying; otherwise, new updates will not be picked up
  • Debugging is quite important for troubleshooting 


Further ahead:

AIS is working with a large organization that wants to discover relationships between data and the business by iteratively integrating data from many sources into Azure Data Lake. The data will be analyzed by different groups within the organization to discover new factors that might affect the business. Data will then be published to the appropriate consumers using PowerBI.

In the initial phase, data lake ingests data from some of the Operational Systems. Eventually, data will be captured not only from all the organization’s systems but also from streaming data from IoT devices.  

Azure Data Lake 

Azure Data Lake allows us to store a vast amount of data of various types and structures. Data can be analyzed and transformed by Data Scientists and Data Engineers. 

The challenge with any data lake system is preventing it from becoming a data swamp. To establish an inventory of what is in a data lake, we capture the metadata such as origin, size, and content type during ingestion. We also have the Interface Control Document (ICD) from the Operational Systems that describe the data definition of the source data. 

Logical Zones

The data in the data lake is segregated into logical zones to allow logical and physical separation, keeping the environment secure, organized, and agile. As the data progresses through the zones, various transformations are performed.

  • Landing Zone is where the original data files are stored untouched. No data is deleted from this zone, and access to it is limited. 
  • Raw Zone is where data quality validation is applied based on the rules defined in the source ICD. Any data that fails validation moves to the Error Zone. 
  • Curated Zone is where we store the cleansed and transformed data, ready for consumption. The transformation is done for different audiences, and within the zone, folders are created for each specialized change. 
  • Error Zone is where we store data that failed validation. A notification is sent to the registered data curators when new data arrives. 
  • Metadata Zone is where we keep track of metadata for the source and the transformed data.

Metadata Zone Organization

The source systems have security requirements that prevent access to sensitive data. When the folders are created, permissions are given to security groups in Azure Active Directory. The same security rules are applied to the subsequent folders.

Now that the data is in the data lake, we allow each consuming group to create their own transformation rules. The transformed data is then moved to the curated zone ready to be loaded to the Azure Data Warehouse.

Azure Data Factory

Azure Data Factory orchestrates the movement and transformation of data, as shown in the diagram below. When a file is dropped in the Landing Zone, an Azure Data Factory pipeline runs that consists of activities to Unzip, Validate, Transform, and Load the data into the Data Warehouse.
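Expressed as a pipeline definition, the first two steps of that chain might look like the following sketch; the pipeline, activity, and linked-service names are assumptions, not the project's actual configuration:

```json
{
  "name": "LandingZonePipeline",
  "properties": {
    "activities": [
      {
        "name": "Unzip",
        "type": "AzureFunctionActivity",
        "linkedServiceName": { "referenceName": "UnzipFunctionApp", "type": "LinkedServiceReference" },
        "typeProperties": { "functionName": "Unzip", "method": "POST" }
      },
      {
        "name": "Validate",
        "type": "AzureFunctionActivity",
        "dependsOn": [ { "activity": "Unzip", "dependencyConditions": [ "Succeeded" ] } ],
        "linkedServiceName": { "referenceName": "ValidateFunctionApp", "type": "LinkedServiceReference" },
        "typeProperties": { "functionName": "Validate", "method": "POST" }
      }
    ]
  }
}
```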

The unzipping is performed by a custom-code Azure Function activity rather than the Copy activity’s decompress functionality. Out of the box, Azure Data Factory can decompress only GZip, Deflate, and BZip2 files, but not Tar, Rar, 7Zip, or Lzip.

The basic validation rules, such as data range, valid values, and reference data, are described in the ICD. A custom Azure Function activity was created to validate the incoming data.

Data is transformed using Spark activity in Azure Data Factory for each consuming user. Each consumer has a folder under the Curated Zone.

Data Processing Example

Tables in the Azure Data Warehouse were created based on the Curated Zone by executing a Generate Azure Function activity to create data definition language (DDL). The script modifies the destination table if a new field is added.

Finally, the data is copied to the destination tables to be used by end-users and warehouse designers.

In each step, we captured business, operational, and technical metadata to help us describe the data in the lake. The metadata information can be uploaded to a metadata management system in the future.

This blog is a follow-on about Azure Cognitive Services, Microsoft’s offering for enabling artificial intelligence (AI) applications in daily life. The offering is a collection of AI services with capabilities around speech, vision, search, language, and decision.

Azure Personalizer is one of the services in the suite of Azure Cognitive Services: a cloud-based API service that allows you to choose the best experience to show to your users by learning from their real-time behavior. Azure Personalizer is based on cutting-edge technology and research in reinforcement learning; it uses a machine learning model that is different from traditional supervised and unsupervised learning models.

In Azure Cognitive Services Personalizer: Part One, we discussed the core concepts and architecture of Azure Personalizer Service, Feature Engineering, its relevance, and its importance.

In Part Two, we cover a couple of use cases in which Azure Personalizer Service is implemented. We looked at features used, reward calculation, and their test run result.

In this blog, Part Three, we list out recommendations and capacities for implementing solutions using Azure Personalizer Service.

Recommendations, Current Capacities, and Limits

This section describes some essential recommendations while implementing Personalizer, and current capacity factors for its use.


  • Personalizer starts with a default learning policy, which can yield moderate performance. As part of optimization, evaluations can be run that allow Personalizer to create new learning policies specifically optimized to a given use case. Optimized learning policies, generated during evaluation, perform significantly better for each specific loop.
  • The reward score calculation should consider only relevant factors with appropriate weights. Experiment duration (the Rank-to-Reward cycle) should be short enough that the reward score can be computed while it is still relevant. How well the ranked results worked can be computed by business logic, by measuring a related aspect of the user behavior, and is expressed as a value between -1 and 1.
  • The context for the ranking items (actions) can be expressed as a dictionary of at least 5 features that you think would help make the right choice, and that doesn’t include personally identifiable information. Similarly, each item (action) should be expressed as a dictionary of at least 5 attributes or features that you think will help Personalizer make the right choice. There should be less than 50 actions (items) to rank per call.
  • Personalizer will adapt to continuous change in the real world, but results won’t be optimal if there are not enough events and data to learn from to discover and settle on new patterns. Retain data long enough to accumulate a history of at least 100,000 interactions.
  • Choose a use case that happens often enough; consider looking for use cases that happen at least 500 times per day, with enough context and action features defined to facilitate learning.
  • Ensure your data retention settings allow Personalizer to collect enough data to perform offline evaluations and policy optimization; this is typically at least 50,000 data points.
  • Don’t use Personalizer where the personalized behavior isn’t something that can be discovered across all users but rather something that should be remembered for specific users or comes from a user-specific list of alternatives.
  • To prevent an action from being ranked, either remove it from the list when making the Rank API call or use Inactive Events. To disable automatic learning, call the Rank API with learningEnabled = false. Learning for an inactive event is implicitly activated if you send a reward for the Rank results.
  • An exploration setting of zero will negate many of the benefits of Personalizer. With this setting, Personalizer uses no user interactions to discover better behavior, which leads to model stagnation, drift, and ultimately lower performance.
  • A setting that is too high will negate the benefits of learning from user behavior. Setting it to 100% implies constant randomization, and any learned behavior from users would not influence the outcome.
  • To realize the full potential of AI offerings, design and implementation should gain the full trust of end-users, aspects to consider include ethics, privacy, security, safety, inclusion, transparency, and accountability.
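As a concrete illustration of the reward-score guidance above, business logic can combine whatever behavior signals are relevant and clamp the result into the required [-1, 1] range. The signal names and weights here are illustrative assumptions, not anything prescribed by the service:

```typescript
interface BehaviorSignals {
  clicked: number;       // 1 if the ranked item was clicked, else 0
  dwellSeconds: number;  // time spent on the resulting content
}

// Weighted combination of signals, clamped to Personalizer's [-1, 1] range.
function computeReward(signals: BehaviorSignals): number {
  const dwellScore = Math.min(signals.dwellSeconds / 60, 1); // cap credit at one minute
  const raw = 0.5 * signals.clicked + 0.5 * dwellScore;
  return Math.max(-1, Math.min(1, raw));
}
```

The computed value would then be sent to the Reward API for the corresponding eventId within the configured Reward Wait Time.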

Capacity & Limits

  • How well the ranked choice worked needs to be measured with relevant user behavior and scored between -1 and 1 with single or multiple calls to the Reward API.
  • Context and Actions (Items) have enough features (at least 5 features each) to facilitate learning. Fewer than 50 items (actions) to rank per single Rank call.
  • Retaining the data for long enough to accumulate a history of at least 100,000 interactions to perform effective offline evaluations and policy optimizations, typically at least 50,000 data points.
  • Personalizer supports features of data type string, numeric, and Boolean. An empty context is not supported; the context should have at least one feature.
  • For categorical features, pre-defining of the possible values or ranges is not required.
  • Features that are not available at the time of Rank call should be omitted instead of sent with a null value.
  • There can be hundreds of features defined for a use case, but they must be evaluated (using principles of Feature Engineering and Personalizer Evaluation option) for effectiveness, and less effective ones should be removed.
  • The features in the actions may or may not have a correlation with the features in the context used in Personalizer.
  • If the ‘Reward Wait Time’ expires, and there has been no reward information, a default reward is applied to that event for training. The maximum wait duration supported currently is 6 days.
  • Personalizer Service can return a rank very rapidly, and Azure will auto-scale as needed to maintain the rapid generation of ranking results. Throughput is calculated by adding the sizes of the action and context JSON documents, factoring in a rate of 20 MB/sec.
  • Context and Actions (items) are expressed as a JSON object that is sent with the Rank API call. JSON objects can include nested JSON objects and simple property/values. Arrays can be included if the items are numbers.


The Azure Cognitive Services suite facilitates a broad range of AI implementations. It enables applying the benefits of AI technology to the little things we do in our daily lives. Personalizer is a simple-to-use yet powerful AI service that can be applied in any scenario where ranking options is meaningful, once the scenario is expressed with a rich set of features. I hope this blog post is helpful in explaining the high-potential uses of the Azure Cognitive Services Personalizer Service. I also want to thank my colleague Kesav Chenna at AIS for his contribution in implementing Personalizer in the use cases discussed in this blog.