In this article, I will show you how we can create reusable custom components in canvas apps. We will work on two simple limitations on the current controls in Canvas Apps and create custom components to overcome those limitations.

1. A specific range cannot be set on the date picker control to limit users to selecting dates from that range only.

2. Power Apps currently limits the text input control to two formats, i.e., text and number. We will create a custom component that supports regex as a general setting, so it can be applied to multiple input fields of the same format.

Let’s get started. We will first create the components and then see how to use them in the canvas app.

Creating Custom Components

To create the components in the Power Apps app studio, we first have to enable ‘Components’ in the experimental features tab from the advanced settings of the app.

Creating Custom Components

Date Picker

The current date control allows you to restrict only the year while configuring the control. Using already available controls, we can design a component for the date control that enables restricting to a range of dates. The display of the date control component is based on a gallery and is explained in detail here. We will start by creating a few custom properties for the component. The idea behind this date control is to enable functionality to restrict users to select the date from a specified range. For this, I created a “start date” and an “end date” property of data-type “date and time”. I have also added properties to customize the color of the different sub-controls of the component.

Date Picker

1. This is a label that displays the date selected from the custom date picker control component. The expression used on the “Text” property of this label:
Selected Date Label
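A rough sketch of what that Text expression could look like; the variable name dateselected comes from later in this post, but the date format and the fallback text are assumptions, not the author’s exact formula:

If(IsBlank(dateselected), "Select a date", Text(dateselected, "dd/mm/yyyy"))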

2. This is an icon (‘calendar’) to expand the date picker. The expression used on the “OnSelect” property of this control:

Icon Calendar

Explanation: The visibility of the calendar is set here based on the “CalendarVisibility” variable. “SelectedRange” is a variable that sets the context for the appropriate month to be displayed on the date picker.
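As a sketch (not the exact formula from the screenshot), the OnSelect logic could be written as follows, assuming the variables are created with Set (context variables have limitations inside components) and using the names mentioned above:

// Toggle the calendar and point the view at the month of the current selection (or today)
Set(CalendarVisibility, !CalendarVisibility);
Set(
    SelectedRange,
    If(
        IsBlank(dateselected),
        Date(Year(Today()), Month(Today()), 1),
        Date(Year(dateselected), Month(dateselected), 1)
    )
)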

3. Upon clicking this icon, the user is navigated to the previous month. The expression used on the “OnSelect” property of this control is:

Previous Month Navigation

Explanation: This evaluates the month label based on the current month on the date picker; the result is the previous month with respect to the current month. The expression used on the “DisplayMode” property of this control:

Display Mode
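A rough sketch of both formulas is below; DateControl is an assumed name for the component, and the next-month icon in step 5 is the mirror image of this with +1 instead of -1:

// OnSelect: move the calendar view back one month
Set(SelectedRange, DateAdd(SelectedRange, -1, Months))

// DisplayMode: disable the icon once the previous month falls before the StartDate property
If(
    DateAdd(SelectedRange, -1, Months) >= Date(Year(DateControl.StartDate), Month(DateControl.StartDate), 1),
    DisplayMode.Edit,
    DisplayMode.Disabled
)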

4. This is a label control and displays the selected month and year on the date picker.

5. Upon clicking this icon, the user is navigated to the next month based on the current month. The expression used on the “OnSelect” property of this control is:

On Select

Explanation: This evaluates the month label based on the current month on the date picker; the result is the next month with respect to the current month. The expression used on the “DisplayMode” property of this control:

Display Mode

6. This is a gallery of buttons that shows the days week-wise, one week per row. The expression used on the “OnSelect” property of the button inside of the gallery is:

On Select

Explanation: The “dateselected” variable value is set by clicking this button. The expression used on the “DisplayMode” property of this button control is:

Expression used on DisplayMode

Explanation: Here is the date validation that checks whether the date is within the range of the “start date” and “end date” properties and belongs to the currently selected month. The expression used on the “Text” property of the button control is: (This expression has been adapted from this reference link.)

Date Validation

Explanation: This gets the date for the day shown in each cell of the date picker. The expression used on the “Items” property of the gallery control is: (This expression has been adapted from this reference link.)

Items property gallery control

Explanation: This creates the 42 cells for the date picker inside of the gallery control. The expression used on the “TemplateFill” property of the gallery control is:

Template Fill Property

Explanation: This highlights the currently selected item (date) in the gallery control.
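Pulling the gallery pieces together, the formulas could be sketched roughly as follows. Only the names dateselected, SelectedRange, CalendarVisibility, and the StartDate/EndDate properties come from this post; the component name DateControl, the FirstDayOfView variable, the SelectedFillColor property, the Sequence-based item list, and the exact date math are assumptions:

// Items of the gallery: 42 cells (6 weeks x 7 days)
Sequence(42)

// FirstDayOfView: the Sunday on or before the first of the displayed month, e.g. set alongside SelectedRange
Set(
    FirstDayOfView,
    DateAdd(
        Date(Year(SelectedRange), Month(SelectedRange), 1),
        1 - Weekday(Date(Year(SelectedRange), Month(SelectedRange), 1)),
        Days
    )
)

// Text of the button inside the gallery: the day number for that cell
Day(DateAdd(FirstDayOfView, ThisItem.Value - 1, Days))

// OnSelect of the button: remember the chosen date and close the calendar
Set(dateselected, DateAdd(FirstDayOfView, ThisItem.Value - 1, Days));
Set(CalendarVisibility, false)

// DisplayMode of the button: only dates inside StartDate..EndDate and in the displayed month are selectable
If(
    DateAdd(FirstDayOfView, ThisItem.Value - 1, Days) >= DateControl.StartDate &&
    DateAdd(FirstDayOfView, ThisItem.Value - 1, Days) <= DateControl.EndDate &&
    Month(DateAdd(FirstDayOfView, ThisItem.Value - 1, Days)) = Month(SelectedRange),
    DisplayMode.Edit,
    DisplayMode.Disabled
)

// TemplateFill of the gallery: highlight the currently selected date
If(DateAdd(FirstDayOfView, ThisItem.Value - 1, Days) = dateselected, DateControl.SelectedFillColor, RGBA(0, 0, 0, 0))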

The selected date from this component can be accessed through the “SelectedDate” property. Note: The color properties defined are used in the component on all the individual controls to sync the color coordination amongst them.

Regex

The current text input control only allows you to choose between a number or text input format. We can design a component that supports regex: based on the chosen property (the required format), the text input is either formatted or restricted to the type defined in the component. If the input text does not match the type chosen on the component, the field is reset and a warning message is displayed to the user.

Regex

7. This is a text input control and the text format is set through the “InputFormat” custom property created on this component. The expression used on the “OnChange” property of this control is:

Setting Text Format

Explanation: The “if” statement checks the “InputFormat” custom property for the text input against the respective regex statement and resets the input if the regex is not matched. The expression used on the “Mode” property of this control is:

Expression on Mode Property

Explanation: If the “InputFormat” on this component is set to “Password” then the mode will be set to password in order to mask the user input.
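A rough sketch of those two formulas follows. The InputFormat property and Match.Email (a built-in Power Apps pattern) are real; the component name RegexControl, the control name txtInput, the warning variable, and the URL/password regex variables are assumptions:

// OnChange of the text input: reset the field and show a warning when the pattern does not match
If(
    Or(
        RegexControl.InputFormat = "Email" && !IsMatch(txtInput.Text, Match.Email),
        RegexControl.InputFormat = "URL" && !IsMatch(txtInput.Text, UrlRegex),
        RegexControl.InputFormat = "Password" && !IsMatch(txtInput.Text, PasswordRegex)
    ),
    Reset(txtInput); Set(ShowWarning, true),
    Set(ShowWarning, false)
)

// Mode of the text input: mask the input when the component is used for passwords
If(RegexControl.InputFormat = "Password", TextMode.Password, TextMode.SingleLine)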

8. This is a label that is used to display the warning message to the user in case of a mismatch of the input based on the “InputFormat” property.

9. This is an icon used to display the warning sign to the user in case of a mismatch of the input based on the “InputFormat” property.

Email:

This is an “InputFormat” and the regex that this type of input is being compared to is:

Input Format Comparison

URL:

This is an “InputFormat” and the regex that this type of input is being compared to is:

URL Input Format

Password:

This is an “InputFormat” and the regex that this type of input is being compared to is:

Password Input Comparison

The password should include at least one uppercase letter, one lowercase letter, one number, and one special character.
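For reference, typical patterns for these three formats look like the following; these are common examples (the password pattern adds a minimum length of 8 for illustration), not necessarily the exact expressions used in the component:

Email:    ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$
URL:      ^(https?:\/\/)?([\w-]+\.)+[\w-]+(\/[\w\-.~:/?#@!$&'()*+,;=%]*)?$
Password: ^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$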

Number Format:

Number Format

10. This is a label that displays the converted value based on the formatting defined for the input number. The expression used on the “Text” property of this control is:

Display Converted Value based on formatting

Explanation: If the text input is not empty, then the numbers entered will be formatted as “XXX-XXXXXX”.
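As a sketch (the control name txtPhone and the exact format string are assumptions), the label’s Text formula could be:

If(!IsBlank(txtPhone.Text), Text(Value(txtPhone.Text), "000-000000"))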

11. This is a text input control into which the user enters the number. The “Format” property of this input is set to “Number”. Note: The label in 10 is positioned so that it overlaps the text input control in 11. That is how the user can see the input formatted as set in the label.

Using the Custom Components

Currently, Power Apps restricts the use of a component in galleries, data cards, data tables, etc. We will design a custom sign up form and use the components created above to register a user and save the information on the data source.

Custom Sign Up Form

12. This is a regular text input field that will allow users to enter simple text. The property of this control is set to “text”.

13. This is a component that we have created earlier and the property for this is set to “URL”. When users enter the URL, it is matched against the regex and if it is not in a valid URL format the field is reset.

14. This field expects the input to be in the form of an email address. The component created earlier is used for this control and the property is set to “Email”. If the input does not match against the regex for email, the field will be reset and a warning message will be displayed.

15. The property on the component for this is set to “Password”. If the input does not match the regex set for the password (as mentioned next to the “Password” label), the field will be reset and the warning message will be displayed.

16. This input field expects the input in the form of a “Phone Number” and appends the ‘-’ to the numbers entered based on the regex configured while creating the component.

17. The custom property created for the component can be set here. For each of the above fields, the type of input (regex expression) is defined by setting this property.

Sign Up Form

18. This is the date control component and allows the user to pick a date by clicking on the calendar icon.

19. The custom properties for the date control component can be set here. The start date and the end date are configured to set a range of allowable dates. As an added feature, the colors for the individual controls within the date picker can also be configured here.

20. This is a clear field button; upon clicking it, all the fields are cleared so the user can create a new entry.

21. This is a submit button and the user can submit their details by filling the form and pressing this button. Upon pressing this button, the details filled by the user are patched to the respective columns in the data source.

22. This is a warning generated for an inappropriate email address format that was entered by the user. As soon as the user clicks outside of this control, the field is reset.

23. This is the expanded date picker control, and the dates that can be selected by the user are based on the start and end date range configured in the app. The dates that cannot be selected are indicated by a strikethrough, and the buttons to navigate to the next/previous months are disabled.

In this article, we have seen how we can create custom reusable components in canvas apps with low code and no code methodology. Components can be created with already available controls to overcome certain limitations and enhance the overall application.

Note: This blog post is *not* about Kubernetes infrastructure API (an API to provision a Kubernetes cluster). Instead, this post focuses on the idea of Kubernetes as a common infrastructure layer across private and public clouds.

Kubernetes is, of course, well known as the leading open-source system for automating deployment and management of containerized applications. However, its uniform availability is, for the first time, giving customers a “common” infrastructure API across public and private cloud providers. Customers can take their containerized applications and Kubernetes configuration files and, for the most part, move to another cloud platform. All of this without sacrificing the use of cloud provider-specific capabilities, such as storage and networking, that are different across each cloud platform.

At this time, you are probably thinking about tools like Terraform and Pulumi that have focused on abstracting underlying cloud APIs. These tools have indeed enabled a provisioning language that spans cloud providers. However, as we will see below, the Kubernetes “common” construct goes a step further: rather than being limited to a statically defined set of APIs, Kubernetes extensibility allows the API to be extended dynamically through the use of plugins, described below.

Kubernetes Extensibility via Plugins

Kubernetes plugins are software components that extend and deeply integrate Kubernetes with new kinds of infrastructure resources. Plugins realize interfaces like CSI (Container Storage Interface). CSI defines an interface along with the minimum operational and packaging recommendations for a storage provider (SP) to implement a compatible plugin.

Other examples of interfaces include:

  • Container Network Interface (CNI) – Specifications and libraries for writing plug-ins to configure network connectivity for containers.
  • Container Runtime Interface (CRI) – Specifications and libraries for container runtimes to integrate with kubelet, an agent that runs on each Kubernetes node and is responsible for spawning the containers and maintaining their health.

Interfaces and compliant plug-ins have opened the flood gates to third-party plugins for Kubernetes, giving customers a whole range of options. Let us review a few examples of “common” infrastructure constructs.

Here is a high-level view of how a plugin works in the context of Kubernetes. Instead of modifying the Kubernetes code for each type of hardware or each cloud provider-offered service, it is left to the plugins to encapsulate the know-how to interact with the underlying hardware resources. A plugin can be deployed to a Kubernetes node as shown in the diagram below. It is the kubelet’s responsibility to advertise the capability offered by the plugin(s) to the Kubernetes API service.

Kubernetes Control Panel

“Common” Networking Construct

Consider a networking resource of type load balancer. As you would expect, provisioning a load balancer in Azure versus AWS is different.

Here is a CLI for provisioning ILB in Azure:

CLI for provisioning ILB in Azure
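For illustration, an internal load balancer in Azure can be created with something along these lines (resource names are placeholders, not the exact command from the screenshot):

az network lb create --resource-group myRG --name myInternalLB --sku Standard --vnet-name myVnet --subnet mySubnet --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool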

Likewise, here is a CLI for provisioning ILB in AWS:

CLI for provisioning ILB in AWS
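And roughly the AWS equivalent (the subnet ID is a placeholder):

aws elbv2 create-load-balancer --name my-internal-lb --scheme internal --type network --subnets subnet-0123456789abcdef0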

Kubernetes, based on the network plugin model, gives us a “common” construct for provisioning the ILB that is independent of the cloud provider syntax: a Service of type LoadBalancer.
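A minimal sketch of such a Service manifest, assuming the Azure internal load balancer annotation (for AWS you would swap in the corresponding aws-load-balancer-internal annotation; the rest of the manifest stays the same):

apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80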

“Common” Storage Construct

Now let us consider a storage resource type. As you would expect, provisioning a storage volume in Azure versus Google is different.

Here is a CLI for provisioning a disk in Azure:

CLI for provisioning a disk in Azure
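For example (names and sizes are placeholders):

az disk create --resource-group myRG --name myDataDisk --size-gb 128 --sku Premium_LRS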

Here is a CLI for provisioning a persistent disk in Google:

CLI for provisioning a persistent disk in Google
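And roughly the Google Cloud equivalent:

gcloud compute disks create my-data-disk --size=128GB --type=pd-ssd --zone=us-central1-a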

Once again, under the plugin (device) model, Kubernetes gives us a “common” construct for provisioning storage that is independent of the cloud provider syntax.

The example below shows the “common” storage construct across cloud providers. In this example, a claim for a persistent volume of size 1Gi and access mode “ReadWriteOnce” is being made. Additionally, the storage class “cloud-storage” is associated with the request. As we will see next, persistent volume claims decouple us from the underlying storage mechanism.

cloud-storage-claim
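Based on the description above, the claim would look something like this (a sketch, not the exact manifest from the screenshot):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloud-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cloud-storage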

The StorageClass determines which storage plugin gets invoked to support the persistent volume claim. In the first example below, StorageClass represents the Azure Disk plugin. In the second example below, StorageClass represents the Google Compute Engine (GCE) Persistent Disk.

StorageClass

StorageClass Diagram 2
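As a sketch, the two StorageClass definitions differ only in the provisioner and its parameters:

# Azure Disk
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
# GCE Persistent Disk
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd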

“Common” Compute Construct

Finally, let us consider a compute resource type. As you would expect, provisioning a compute resource in Azure versus GCE is different.

Here is a CLI for provisioning a GPU VM in Azure:

CLI for provisioning a GPU VM in Azure
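For example (names and the GPU VM size are placeholders):

az vm create --resource-group myRG --name my-gpu-vm --image UbuntuLTS --size Standard_NC6 --generate-ssh-keys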

Here is a CLI for provisioning a GPU in Google Cloud:

CLI for provisioning a GPU in Google Cloud:
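And roughly the Google Cloud equivalent:

gcloud compute instances create my-gpu-vm --zone=us-central1-a --machine-type=n1-standard-4 --accelerator=type=nvidia-tesla-t4,count=1 --maintenance-policy=TERMINATE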

Once again, under the plugin (device) model, Kubernetes gives us a “common” compute construct across cloud providers. In the example below, we are requesting a compute resource of type GPU. An underlying plugin (NVIDIA) installed on the Kubernetes node is responsible for provisioning the requisite compute resource.

requesting a compute resource of type GPU
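A minimal sketch of such a request, similar to the AKS GPU example referenced below (image and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-sample
    image: nvidia/cuda:10.2-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1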

Source: https://docs.microsoft.com/en-us/azure/aks/gpu-cluster

Summary

As you can see from the examples discussed in this post, Kubernetes is becoming a “common” infrastructure API across private, public, hybrid, and multi-cloud setups. Even traditional “infrastructure as code” tools like Terraform are building on top of Kubernetes.

Problem

Let’s say you have a well-developed app written in Vue. It’s pretty big and has multiple teams working in the code base for both maintenance and feature development. Now, let’s say you need to include that app in an Angular app and you want to avoid transposing the Vue app to Angular. This recently occurred on one of our projects. The task was to incorporate a Vue.js component into an existing Angular app. Several hang-ups eventually led to rewriting the Angular piece in Vue. The following walkthrough creates a proof of concept to demonstrate that adding a Vue component to an Angular app is possible.

Setup

I’ll be using a premade Vue Calculator to add to an Angular Tour of Heroes app. These should serve well enough as stand-ins for the real-world problem. I’ll be using VSCode throughout; however, any text editor ought to do.

Vue

The first step is to take inventory of the apps we’ll be using. To do that I’ll pull the calculator repo down locally and install all dependencies.  

Vue Formula
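The commands for that step look roughly like this, using the repository listed in the Works Cited section:

git clone https://github.com/kylbutlr/vue-calculator.git
cd vue-calculator
npm install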

Now that the app is local and dependencies installed, I’ll check it out by serving the Vue app. 

Vue Formula 2
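With a stock Vue CLI project, serving locally is simply:

npm run serve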

Then navigate to http://localhost:8080 in the browser.

Vue Calculator Snapshot

This step strictly speaking isn’t necessary, however, should something go wrong later on it’s good to know we at least started with functional components.

Now you can build this Vue app as a web component. To do this, I’ll be modifying the npm scripts located in package.json. Open package.json in VSCode and copy the build script. Paste it directly below the copied build script and change the name, I’ll be naming mine build-wc to signify that it builds a web component. Next, I’ll prepend some flags to the build-wc script.

Build WC Formula
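The resulting scripts section could look something like this; the component path is an assumption based on a typical Vue CLI layout, not the exact file from the repository:

"scripts": {
  "serve": "vue-cli-service serve",
  "build": "vue-cli-service build",
  "build-wc": "vue-cli-service build --target wc --name my-vue-calc ./src/components/Calculator.vue",
  "lint": "vue-cli-service lint"
}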

The target flag tells the CLI how to build the Vue app; I’ll use wc since we want to create a web component. Additional build target information can be found on the Vue CLI docs build targets page. The name flag will determine the output JavaScript file name in addition to the HTML tag I will use in Angular where I want the calculator to appear. Finally, the last parameter is the component I want to use as the entry point. Since the Vue Calculator consists of only one component, you could use either App.vue or Calculator.vue as the final argument and have similar results. Using Calculator.vue illustrates how you can use a single component instead of an entire app. Now that I have created the web component build script, I’ll run it and again inspect the results.

Run the build-wc script

I can now see a /dist directory with several files.

dist directory formula

I’ll install and use the npm package to serve this up and view the web component in the browser.

Install and use npm package
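The post does not name the package; http-server is one option whose default port matches the 8080 URL shown below:

npm install --global http-server
http-server ./dist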

Now I can open the browser and navigate to http://127.0.0.1:8080/demo.html.

Vue Calc Demo

Opening the developer tools in Chrome and viewing the Elements tab reveals the custom <my-vue-calc></my-vue-calc> mentioned earlier; I’ll include the web component the same way when I get to the integration. For now, you can once again see the calculator working as normal. You may notice that the calculator looks slightly different from the first time the app was served. This is due to the app having been built as a web component with Calculator.vue as the entry point. This demonstrates how styling works within Vue apps. In App.vue, the calculator I am using has a style block with several styles added to the #app selector. However, since I built only the Calculator component of the larger app, I don’t get the styles from App.vue. For demo purposes, I’ll ignore the text alignment and font colors of the number keys. There are several gotchas when dealing with web-components so check out the Vue CLI docs for additional info.

Angular

Now that I am satisfied with the results of building the Vue web component I’ll pull down the Angular app to which I’ll be adding the calculator.

Angular App snapshot

And just like before, I’ll install dependencies.

Installing dependencies

I was having conflicts with the node-sass package and the version of Node I had installed, v12.9.1. Using a Node version manager, I changed the current version of Node to v11.15.0, which solved that problem. Once the app was cloned and dependencies were installed, I served the app, again, to ensure that it was working prior to any changes I made. To accomplish this I ran the following in the root directory of the Tour of Heroes app.
ng serve snapshot

Integration

Now that the app is working I added the custom web component html tag to index.html.

Adding HTML web component
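A sketch of that change in index.html; the <my-root> selector matches the element mentioned below, and the exact placement is an assumption:

<body>
  <my-root></my-root>
  <my-vue-calc>TEST</my-vue-calc>
</body>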

I’ve added the ‘TEST’ text as a sanity check. Since the web component isn’t defined yet, the tag defaults to the behavior of a <div>. Looking at the browser again, the ‘TEST’ text now appears immediately following the <my-root></my-root>.
Test HTML Check in Vue

So far so good.

For organization purposes, I added a my-vue-calc directory nested in libs to hold the actual my-vue-calc.min.js code that I copied from the dist directory of the vue-calc app.
Add vue-calc app directory

In order to use the my-vue-calc component I need to load Vue.js in the Angular app. I accomplished this by installing vue as a dependency using npm.
Load Vue to Angular App

Then I added vue to the angular.json configuration file. While I’m here, I’ll also add the calculator to the scripts.
Angular JSON Configuration
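The relevant portion of angular.json ends up looking roughly like this; the libs path is an assumption based on the directory created above:

"scripts": [
  "node_modules/vue/dist/vue.min.js",
  "src/libs/my-vue-calc/my-vue-calc.min.js"
]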

Having modified angular.json, you will likely have to stop and restart the server for the changes to be picked up and compiled. With that, the Angular app should rebuild itself, refresh the webpage, and the calculator should appear at the bottom.

Angular App Restart

Well… sort of… this is a very simplistic example, implemented in the simplest way. What if I want the web component in one of the Angular components? Doing this would break up the source code a bit more and perhaps allow for lazy loading of the components. I like the idea of having a calculator on the individual hero detail pages. To achieve this, I’ll remove the custom element tag from index.html and move it to the bottom of app/hero-details.component.html. Navigating to a hero details page, a console error appears right away.

Navigating a Hero details page

The custom element should render a <div> with ‘TEST’ since I removed the web component JavaScript from the angular.json config file. To correct the error, I added the following to the @NgModule decorator.

Custom Elements formula snapshot
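A sketch of that change in app.module.ts (the other module contents are abbreviated):

import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent /* ...other components... */],
  imports: [BrowserModule /* ...other modules... */],
  schemas: [CUSTOM_ELEMENTS_SCHEMA],
  bootstrap: [AppComponent]
})
export class AppModule {}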

The CUSTOM_ELEMENTS_SCHEMA tells Angular that you will be using elements that are neither Angular components nor directives in the app; check out the Angular Docs for additional info. The ‘TEST’ text now appears on the hero details page.
Test Text in hero details

The only thing left to do is import the web component JavaScript. In this case, I’ll just add an import statement to the hero-detail.component.ts.
Import Web component JavaScript
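The import is a side-effect-only import; the relative path is an assumption based on the libs folder created earlier:

// hero-detail.component.ts
import '../../libs/my-vue-calc/my-vue-calc.min.js';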

And like magic once more!

Test Text

In summary, the example in this walkthrough is absolutely contrived. I could have just as easily brought a native Angular calculator app into the Tour of Heroes. However, there is no need for any calculator, Vue or otherwise, where I’ve added it. This walkthrough demonstrates the capability of adding a large, well-maintained Vue app to an Angular app while avoiding a rewrite of the Vue app and the additional maintenance of a new code base. There may be caveats that make this solution unworkable for your application, so use your best judgment. The Angular and Vue docs are an excellent resource for additional information on these topics and many more; check them out here and here.

Works Cited

Angular Docs. (2019, September 19). Retrieved from Angular.io: https://angular.io/docs

Butler, K. (2019, September 24). Vue Calculator. Retrieved from GitHub: https://github.com/kylbutlr/vue-calculator

Papa, J. (2019, September 24). Angular Tour of Heroes. Retrieved from GitHub: https://github.com/johnpapa/angular-tour-of-heroes

Vue CLI Docs. (2019, September 19). Retrieved from Vue CLI: https://cli.vuejs.org/guide/

In this post, I will show you how DevOps practices can add value to a variety of Office 365 development scenarios. The practices we will discuss are Infrastructure as Code, Continuous Integration, and Continuous Delivery. The advances in DevOps and SharePoint Framework (SPFx) have allowed us to make advancements in the way that we develop software and have improved our efficiency.

Infrastructure as Code (IaC)

Practicing IaC means that the infrastructure your applications depend on is created and maintained by code that is source controlled, tested, and deployed to production much like software. When discussing IaC, we’re typically talking about provisioning resources to a cloud provider. In our case, the “infrastructure” is Office 365 – a SaaS product with extensive customization and configuration options.  

While you could manage your O365 tenant with PowerShell, the code-centric and template-based PnP Provisioning Framework aligns better with this practice because: 

  1. Using the framework’s declarative XML syntax, you describe what you want to exist rather than writing code to manage how it gets created.
  2. It is easier for developers to run idempotent deployments to enact the desired state of your Office 365 tenant.  

While originally developed to support SharePoint Online and on-premise deployments, you can see in its latest schema that it has expanded to support Microsoft Teams, OneDrive, and Active Directory.  

Continuous Integration (CI) 

The practice of continuously integrating first means that your team has established the habit of frequently merging small batches of changes into a central code repository. Upon that merge, we automatically build and test the code to quickly identify bugs and quality issues.  

SharePoint Framework is a commonly used tool to extend the capabilities of SharePoint Online and on-premises SharePoint. Much like the Provisioning Framework, SharePoint Framework is expanding to support other Office 365 services. You can currently use it to develop for Microsoft Teams and will soon be able to use it to develop Office Add-ins.

Azure DevOps is a one-stop-shop service that provides everything you need throughout the software development lifecycle. For example, your team can version control your project’s source code in Repos. After merging changes, use Pipelines to trigger a CI process that runs the build and test tasks of your SharePoint Framework solution.

Continuous Delivery (CD)

Continuous Delivery, the practice of running automated deployments through a sequence of environments, starts after a completed CI process. Azure DevOps Pipelines will again be the tool of choice to execute the deployment procedures against each environment.

Example Solution

A solution demonstrating how to use the technologies and practices described above is available on the Applied Information Sciences GitHub account. The result is a pipeline capable of receiving frequent changes to O365 configuration and SPFx applications from one or many developers, verifying the quality of the change, and deploying it to a series of environments.

Dev Tenant Diagram

I encourage you to explore the source code using the following summary as a guide. You’ll find the solution organized into three areas – SPFx, Provisioning, and Pipeline.

SPFx

A simple hello world web part was created using the Yeoman generator. Jest was added to test the code. Npm and gulp scripts are used to build and package the source code, which produces an sppkg file.

Provisioning

The PnP Provisioning Template XML file found here defines the desired state of the target tenant. The following is the desired state:

  1. Install the SPFx App into the tenant App Catalog.
  2. Create a site collection that will host our web parts page.
  3. Install the SPFx App to the Site Collection.
  4. Create a page that will host our web part.
  5. Add the web part to the page.

By using parameters for the tenant URL and site owner, the same template can be deployed to multiple environments. A PowerShell build script bundles the template and all required files, such as the SPFx sppkg file, into a single pnp file ready for deployment.

Pipeline

A multi-stage YAML pipeline defined in the Pipeline folder of the example solution runs the following process (a simplified sketch follows the list):

  1. Build, test, and package the SPFx and Provisioning Template source code.
  2. Deploy the prerequisite SharePoint infrastructure to the tenant if it does not already exist.
  3. Install and configure the SPFx web part.
  4. Repeat #2 and #3 for all environments.
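A trimmed-down sketch of such a pipeline is below; the stage names, file paths, variable names, and the PnP PowerShell deployment step are illustrative, not the exact YAML from the repository:

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: npm ci && npm test
      displayName: Build and test the SPFx solution
    - script: npx gulp bundle --ship && npx gulp package-solution --ship
      displayName: Package the SPFx solution
    - publish: sharepoint/solution
      artifact: drop
- stage: DeployDev
  dependsOn: Build
  jobs:
  - deployment: Provision
    environment: dev
    pool:
      vmImage: windows-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - pwsh: |
              # Assumes the PnP PowerShell module (SharePointPnPPowerShellOnline) is available on the agent
              $securePwd = ConvertTo-SecureString "$(TenantPassword)" -AsPlainText -Force
              $cred = New-Object System.Management.Automation.PSCredential("$(TenantUser)", $securePwd)
              Connect-PnPOnline -Url "$(TenantUrl)" -Credentials $cred
              Apply-PnPProvisioningTemplate -Path ./Provisioning/template.pnp
            displayName: Apply the PnP provisioning template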

Build Process Diagram

Secret variables, such as the username and password used to connect to the tenant, are only referenced in the pipeline. The values are set and encrypted in the Azure DevOps pipeline editor.

Variables Diagram with Passwords

Conclusion

In the not-too-distant past, it was high effort to write unit tests for a SharePoint solution, and most deployments were manual. In this post, I have shown you how advancements in the platform and tooling have changed this. The mentality, practices, and tools brought by DevOps can improve the pace and quality of any software development and infrastructure management project, including projects building upon Office 365.

As promised in the previous blog post, here is a detailed explanation of how to connect to APIs secured in the Azure AD from SharePoint Framework (SPFx) web parts. Please read part I of this blog for a thorough understanding of the SharePoint Framework, comparing it with other models, and its constraints/disadvantages before diving in further.  

Connecting to APIs is essential functionality today because it opens up communication with outside data repositories. SharePoint web parts can render data not only from SharePoint lists and libraries but also from external repositories. The data can be owned by anyone outside the organization. The external repositories can be connected to SharePoint for data retrieval via an API (Application Programming Interface) call, and they can live on different platforms, domains, etc. SPFx comes with several namespaces to handle communication between an SPFx web part in SharePoint Online and other repositories via API calls.

Types of API Communications from SPFx

  • Connect to SharePoint APIs to access data residing in SharePoint (SPHttpClient with OData)
    • Used to access data residing in SharePoint lists/libraries
  • Connect to Microsoft Graph (MSGraphClient through MSGraphClientFactory)
    • Used to access users and other user-related info from Azure Active Directory (AAD)
  • Connect to enterprise APIs secured in Azure AD (using AadHttpClient & aadHttpClientFactory)
    • Connect to an Azure API secured in Azure AD from a SharePoint Framework web part in a single-tenant implementation, where both Azure and SharePoint Online are under the same tenant. This blog covers the details of implementing this functionality.
    • Connect to an Azure API secured in Azure AD from a SharePoint Framework web part in a multi-tenant implementation, where Azure and SharePoint Online are in different tenants.
  • Connect to anonymous APIs (using HttpClient to connect to public APIs for weather, etc.)
    • Anonymous APIs are used to access weather data and other publicly available APIs

Connect to Azure AD Secured APIs

Microsoft Web API Permissions

Figure Credit: Microsoft

Pre-Requisites

If your environment is already set up, ensure you have the latest version of Yeoman SharePoint generator by entering: 

npm update -g @microsoft/generator-sharepoint@latest

Steps to Develop, Deploy, and Test SPFx Connecting to Function API Secured in Azure AD

Once all the pre-requisites are met, follow the steps below to develop, deploy, and test the SharePoint Framework connecting to Azure API secured in an Azure active directory. 

1. Create an Azure function (HttpTrigger) returning mock data

Create Azure Function

Figure: Create a new Azure function

New Function Created

Figure: New Azure function is created

2. Create a HttpTrigger & add C# code to return the list of orders

Create Http Trigger

Figure: C# Azure Function code to return the list of orders
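A minimal sketch of such a function with assumed names and mock data; the downloadable code linked below is the authoritative version:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetOrders
{
    [FunctionName("GetOrders")]
    public static IActionResult Run(
        // App Service Authentication (Azure AD) protects the endpoint, so the function itself can be anonymous
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "orders")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Returning mock order data.");
        var orders = new[]
        {
            new { Id = 1, Customer = "Contoso",  Total = 125.00 },
            new { Id = 2, Customer = "Fabrikam", Total = 89.50 }
        };
        return new OkObjectResult(orders);
    }
}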

DOWNLOAD THE AZURE FUNCTION CODE HERE

3. Secure the Azure function by enabling authentication/authorization via Azure Active Directory (AAD) and create an app in AAD. Verify the Azure function works when called from the browser.

Secure Azure Function Screenshot

Figure: Configure authentication/authorization for the Azure function

4. Enable ‘App Service Authentication’

Authenticating App Services Screenshot

Figure: Selection Azure Active Directory for authentication/authorization

5. Active Directory authentication is set up & the API is secured in an Azure AD
Register Azure Function

Fig: Registered Azure function in Azure AD

6. Enable CORS (Cross-Origin Resource Sharing). Even though Azure & SharePoint Online are in the same tenant, they are in different domains.

Cross Origin Resource Sharing

Figure: Configure CORS

7. Add the SharePoint tenant URL.

Add SharePoint for CORS

    Figure: Add the SharePoint for CORS to authenticate the SharePoint site in Azure

8. Azure function API is secured in Azure AD & the application ID will be used in the SPFx web part.


Azure function installed on AD

Figure: Azure function is registered in the Azure AD as an Enterprise application

9. SharePoint online tenant/admin center in O365.

Available sites in SharePoint admin center

Figure: Available sites in the SharePoint admin center

10. Create an SPFx web part project to render the data by connecting to API secured in Azure AAD. Use Yeomen to generate a web part. Use this link for more information on generating web parts.

Yeoman generator to generate SPFx web part

Figure: Yeoman generator to generate SPFx web part

11. Add web API permission requests in config/package-solution.json file.

src\webparts\[webpartname]\config\package-solution.json – add two web api permission requests

"webApiPermissionRequests": [
  {
    "resource": "contoso-api",
    "scope": "user_impersonation"
  },
  {
    "resource": "Windows Azure Active Directory",
    "scope": "User.Read"
  }
]

SPFx web part configuration file

Figure: API permissions in SPFx web part configuration file

12. Import namespaces for enterprise API communication.

import AadHttpClient to connect with API in src\webparts\[webpartname]\[webpartname]WebPart.ts

import { AadHttpClient, HttpClientResponse } from '@microsoft/sp-http';

src\webparts\[webpartname]\[webpartname]WebPart.ts

namespaces for enterprise API communication
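A sketch of the call itself, with a placeholder application ID and function URL; the downloadable web part code linked below is the authoritative version:

import { AadHttpClient, HttpClientResponse } from '@microsoft/sp-http';

// Inside the web part class: call the Azure Function secured in Azure AD.
private _getOrders(): Promise<any[]> {
  return this.context.aadHttpClientFactory
    .getClient('00000000-0000-0000-0000-000000000000') // Azure AD app (client) ID of the API
    .then((client: AadHttpClient): Promise<HttpClientResponse> =>
      client.get('https://<your-function-app>.azurewebsites.net/api/orders', AadHttpClient.configurations.v1))
    .then((response: HttpClientResponse): Promise<any[]> => response.json());
}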

DOWNLOAD THE WEB PART CODE HERE

13. Build, package, and upload the package to the SharePoint App Catalog.

gulp bundle --ship && gulp package-solution --ship

gulp clean (for redeploying after updates)

14. Add the SPFx package to the tenant app catalog in your Office 365 tenant. SPFx deploys the API-related permission requests to the SharePoint admin center.

SPFx deploys API related file to the SharePoint

15. Approve requested API permissions.

From the SharePoint admin center in Office 365/SharePoint, approve the API from the API Management page.

Once the API is approved, the SPFx web part can be added to the SharePoint site page


API permissions are available in SharePoint online admin center

Figure: API permissions are available in SharePoint online admin center

16. Create a new site page from the developers’ site and add the SPFx web part.

Add the web part to SharePoint page

Figure: Add the web part to the SharePoint page

17. If all goes well, your web part will be rendered with data that is served from the API call!
Rendered with Data

SPFx – Connect to APIs Gotchas

  • Connecting to an API secured in Azure AD did not work via the SPFx AadHttpClient & aadHttpClientFactory in SharePoint 2019 on-premises; this.context.aadHttpClientFactory did not work. I had chosen a web part that targets “SharePoint 2019 and SharePoint Online” when creating the SPFx web part via the Yeoman generator. Choose only ‘SharePoint Online’ to use AadHttpClient & aadHttpClientFactory.
  • The Microsoft example code did not work as-is; the Azure function needed a slight tweak. An additional permission must be added to “webApiPermissionRequests” in config\package-solution.json:

{
  "resource": "Windows Azure Active Directory",
  "scope": "User.Read"
}

  • Single-tenant vs multiple tenant access
    • First, I set up the Azure function and secured it as an AAD API in a personal MSDN subscription tenant and tried to connect from a Developer (free) O365/SharePoint Online SPFx web part. But the API permission in the SharePoint admin center could not be approved. The permission to access an API from a different tenant did not work with the way the Azure function API was configured in AAD.
    • To overcome that issue, I set up an Azure tenant under the developer O365 tenant with the same credential. The API could then be approved. More Info…
    • To overcome this, need to configure AAD API authentication to multi-tenant. More info…
  • gulp clean is important when re-deploying; otherwise, new updates will not be picked up.
  • Debugging is quite important for troubleshooting 

TRY IT YOURSELF! DOWNLOAD THE CODE TO GET STARTED.


AIS is working with a large organization that wants to discover relationships between data and the business by iteratively integrating data from many sources into Azure Data Lake. The data will be analyzed by different groups within the organization to discover new factors that might affect the business. Data will then be published to the appropriate consumers using PowerBI.

In the initial phase, data lake ingests data from some of the Operational Systems. Eventually, data will be captured not only from all the organization’s systems but also from streaming data from IoT devices.  

Azure Data Lake 

Azure Data Lake allows us to store a vast amount of data of various types and structures. Data can be analyzed and transformed by Data Scientists and Data Engineers. 

The challenge with any data lake system is preventing it from becoming a data swamp. To establish an inventory of what is in a data lake, we capture the metadata such as origin, size, and content type during ingestion. We also have the Interface Control Document (ICD) from the Operational Systems that describe the data definition of the source data. 

Logical Zones

The data in the data lake is segregated into logical zones to allow logical and physical separation that keeps the environment secure, organized, and agile. As the data progresses through the zones, various transformations are performed.

  • Landing Zone is a place where the original data files are stored untouched. No data is deleted from this zone, and access to this zone is limited.
  • Raw Zone is a place where data quality validation is applied based on the rules defined in the source ICD. Any data that fails validation moves to the Error Zone.
  • Curated Zone is a place where we store the cleansed and transformed data, ready for consumption. The transformation is done for different audiences, and within the zone, folders will be created for each specialized change.
  • Error Zone is a place where we store data that failed validation. A notification is sent to the registered data curators upon the arrival of new data.
  • Metadata Zone is a place where we keep track of metadata of the source and the transformed data.

Metadata Zone Organization

The source systems have security requirements that prevent access to sensitive data. When the folders are created, permissions are given to security groups in Azure Active Directory. The same security rules are applied to the subsequent folders.

Now that the data is in the data lake, we allow each consuming group to create their own transformation rules. The transformed data is then moved to the curated zone ready to be loaded to the Azure Data Warehouse.

Azure Data Factory

Azure Data Factory orchestrates the movement and transformation of data, as shown in the diagram below. When a file is dropped in the Landing Zone, an Azure Data Factory pipeline is triggered that consists of activities to Unzip, Validate, and Transform the data and Load it into the Data Warehouse.

The unzipping is performed by a custom-code Azure Function activity rather than the copy activity’s decompress functionality. The out-of-the-box functionality of Azure Data Factory can decompress only GZip, Deflate, and BZip2 files, but not Tar, Rar, 7Zip, or Lzip.

The basic validation rules, such as data range, valid values, and reference data, are described in the ICD. A custom Azure Function activity was created to validate the incoming data.
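For illustration, the validation step can be expressed in the pipeline definition as an Azure Function activity along these lines; the activity names, linked service, and parameter are placeholders, not the project's actual definition:

{
  "name": "ValidateFile",
  "type": "AzureFunctionActivity",
  "dependsOn": [
    { "activity": "UnzipFile", "dependencyConditions": [ "Succeeded" ] }
  ],
  "linkedServiceName": {
    "referenceName": "ValidationFunctionApp",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "functionName": "ValidateIncomingFile",
    "method": "POST",
    "body": { "fileName": "@{pipeline().parameters.fileName}" }
  }
}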

Data is transformed using Spark activity in Azure Data Factory for each consuming user. Each consumer has a folder under the Curated Zone.

Data Processing Example

Tables in the Azure Data Warehouse were created based on the Curated Zone by executing an Azure Function activity that generates the data definition language (DDL). The script modifies the destination table if a new field is added.

Finally, the data is copied to the destination tables to be used by end-users and warehouse designers.

In each step, we captured business, operational, and technical metadata to help us describe the data in the lake. The metadata information can be uploaded to a metadata management system in the future.

This blog is a follow-on about Azure Cognitive Services, Microsoft’s offering for enabling artificial intelligence (AI) applications in daily life. The offering is a collection of AI services with capabilities around speech, vision, search, language, and decision.

Azure Personalizer is one of the services in the suite of Azure Cognitive Services, a cloud-based API service that allows you to choose the best experience to show to your users by learning from their real-time behavior. Azure Personalizer is based on cutting-edge technology and research in the area of Reinforcement Learning; it uses a machine learning model that is different from traditional supervised and unsupervised learning models.

In Azure Cognitive Services Personalizer: Part One, we discussed the core concepts and architecture of Azure Personalizer Service, Feature Engineering, its relevance, and its importance.

In Part Two, we cover a couple of use cases in which Azure Personalizer Service is implemented. We looked at features used, reward calculation, and their test run result.

In this blog, Part Three, we list out recommendations and capacities for implementing solutions using Azure Personalizer Service.

Recommendations, Current Capacities, and Limits

This section describes some essential recommendations while implementing Personalizer, and current capacity factors for its use.

Recommendations

  • Personalizer starts with a default learning policy, which can yield moderate performance. As part of optimization, Evaluations are run that allow Personalizer to create new learning policies specifically optimized to a given use case. Optimized learning policies, generated during evaluation, perform significantly better for each specific loop.
  • The reward score calculation should consider only relevant factors with appropriate weight. Experiment duration (Rank to Reward cycle) should be low enough that the reward score can be computed while it’s still relevant. How well-ranked results worked can be computed by business logic, by measuring a related aspect of the user behavior, and it is expressed in value between -1 and 1.
  • The context for the ranking items (actions) can be expressed as a dictionary of at least 5 features that you think would help make the right choice, and that doesn’t include personally identifiable information. Similarly, each item (action) should be expressed as a dictionary of at least 5 attributes or features that you think will help Personalizer make the right choice. There should be less than 50 actions (items) to rank per call.
  • Personalizer will adapt to continuous change in the real world, but results won’t be optimal if there are not enough events and data to learn from to discover and settle on new patterns. Data can be retained long enough to accumulate a history of at least 100,000 interactions.
  • You should choose a use case that happens often enough. Consider looking for use cases that happen at least 500 times per day. Context and actions have enough features defined to facilitate learning.
  • Your data retention settings allow Personalizer to collect enough data to perform offline evaluations and policy optimization. This is typically at least 50,000 data points.
  • Don’t use Personalizer where the personalized behavior isn’t something that can be discovered across all users but rather something that should be remembered for specific users or comes from a user-specific list of alternatives.
  • To prevent an action from being ranked, it can either be removed from the list while making the Rank API call, or Inactive Events can be used. To disable automatic learning, call the Rank API with learningEnabled = False. Learning for an inactive event is implicitly activated if you send a reward for the Rank results.
  • Personalizer exploration setting of zero will negate many of the benefits of Personalizer. With this setting, Personalizer uses no user interactions to discover better user behavior. This leads to model stagnation, drift, and ultimately lower performance.
  • A setting that is too high will negate the benefits of learning from user behavior. Setting it to 100% implies constant randomization, and any learned behavior from users would not influence the outcome.
  • To realize the full potential of AI offerings, design and implementation should gain the full trust of end-users, aspects to consider include ethics, privacy, security, safety, inclusion, transparency, and accountability.

Capacity & Limits

  • How well the ranked choice worked needs to be measured with relevant user behavior and scored between -1 and 1 with single or multiple calls to the Reward API.
  • Context and Actions (Items) have enough features (at least 5 features each) to facilitate learning. Fewer than 50 items (actions) to rank per single Rank call.
  • Retaining the data for long enough to accumulate a history of at least 100,000 interactions to perform effective offline evaluations and policy optimizations, typically at least 50,000 data points.
  • Personalizer supports features of data type string, numeric, and Boolean. Empty context is not supported, it should have at least one feature in the context.
  • For categorical features, pre-defining of the possible values or ranges is not required.
  • Features that are not available at the time of Rank call should be omitted instead of sent with a null value.
  • There can be hundreds of features defined for a use case, but they must be evaluated (using principles of Feature Engineering and Personalizer Evaluation option) for effectiveness, and less effective ones should be removed.
  • The features in the actions may or may not have a correlation with the features in the context used in Personalizer.
  • If the ‘Reward Wait Time’ expires, and there has been no reward information, a default reward is applied to that event for training. The maximum wait duration supported currently is 6 days.
  • The Personalizer Service can return a rank very rapidly, and Azure will auto-scale as needed to maintain the rapid generation of ranking results. Throughput is calculated by adding the sizes of the action and context JSON documents and factoring in a rate of 20 MB/sec.
  • Context and Actions (items) are expressed as a JSON object that is sent with the Rank API call. JSON objects can include nested JSON objects and simple property/values. Arrays can be included if the items are numbers.
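As an illustration of that shape, a Rank request body might look like the following; the feature names and action IDs are made up for the example:

{
  "eventId": "75269ad0-bfee-4598-8196-c57383d38e10",
  "contextFeatures": [
    { "timeOfDay": "morning", "device": "mobile" },
    { "weather": "sunny" }
  ],
  "actions": [
    { "id": "article-sports", "features": [ { "topic": "sports", "length": "short" } ] },
    { "id": "article-finance", "features": [ { "topic": "finance", "length": "long" } ] }
  ],
  "excludedActions": [ "article-finance" ],
  "deferActivation": false
}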

Conclusion

The Azure Cognitive Services suite facilitates a broad range of AI implementations. It enables applying the benefits of AI technology to the little things we do in our daily lives. The Personalizer service is a simple-to-use yet powerful AI service that can be applied in any scenario where the ranking of options is meaningful, once it is expressed with a rich set of features. I hope this blog post is helpful in explaining the high-potential uses of the Azure Cognitive Services Personalizer Service. I also want to thank my colleague Kesav Chenna at AIS for his contribution in implementing Personalizer in the use cases discussed in this blog.


When Microsoft introduced pipelines as part of their Azure DevOps cloud service offering, we received the tools to add continuous integration (CI) and continuous delivery (CD) practices to our development processes. An Azure DevOps pipeline can be created in two ways: 1) The current generally available “classic” pipeline tooling, and 2) the new multi-stage YAML pipeline feature which is currently in preview.

Classic Pipelines

Classic pipelines achieve CI through Azure DevOps build pipelines. A build pipeline executes before a developer integrates code changes into a code base. The pipeline does things like execute a build task, run the unit tests and/or run static code analysis. It then either accepts or rejects the new changes based on the outcome of these tasks.

CD is achieved through Azure DevOps release pipelines. After the build pipeline has produced a build artifact, a release pipeline will publish the artifact to various environments for manual functional testing, user experience testing and quality assurance. When testers have thoroughly tested the deployed artifacts, the release pipeline can then push the artifacts to a production environment.

As powerful as these classic CI/CD pipelines are, they do have their drawbacks. Firstly, the tooling for creating build and release pipelines does not provide a unified experience. CI pipelines provide an intuitive GUI to create and visualize the integration steps…

Classic build pipeline editor

… and also allow you to define those very same steps in YAML:

YAML build pipeline editor

Release pipelines also provide a GUI to create and visualize the pipeline steps. The problem is that the interface is different from that of the build pipeline and does not allow YAML definitions:

Classic release pipeline editor

Multi-stage Pipelines

To resolve these discrepancies Microsoft introduced multi-stage pipelines. Currently in preview, these pipelines allow an engineer to define a build, release or a combined build and release pipeline as a single YAML document. Besides the obvious benefits gained through a unified development experience, there are many other good reasons to choose YAML over classic pipelines for both your builds and releases.
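At its simplest, a combined build and release definition reads like this; the stage and environment names are illustrative:

trigger:
- master

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: npm ci && npm test
      displayName: Build and run unit tests
- stage: QA
  dependsOn: Build
  jobs:
  - deployment: DeployToQA
    environment: qa
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploy the build artifact to the QA environment"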

Since you commit YAML definitions directly to source control, you get the same benefits source control has been providing developers for decades. Here are the top 10 reasons (in no particular order) you should choose YAML for your next Azure DevOps pipeline:

1. History

Want to see what your pipeline looked like last month before you moved your connection strings to Azure KeyVault? No problem! Source control allows you to see every change ever made to your pipeline since the beginning of time.

2. Diff

Have you ever discovered an issue with your build but not known exactly when it started failing or why? Having the ability to compare the failing definition with the last known working definition can greatly reduce the recovery time.

3. Blame

Similarly, it can be useful to see who committed the bug that caused the failure and who approved the pull request. You can pull these team members into discussions on how best to fix the issue while ensuring that the original objectives are met.

4. Work Items

Having the ability to see what was changed is one thing but seeing why it was changed is another. By attaching a user story or task to each pipeline commit, you don’t need to remember the thought process that went into a particular change.

5. Rollback

If you discover that the pipeline change you committed last night caused a bad QA environment configuration, simply rollback to the last known working version. You’ll have your QA environment back up in minutes.

6. Everything As Code

Having your application, infrastructure and now build and release pipelines as code in the same source control repository gives you a complete snapshot of your system at any point in the past. By getting an older version of your repo, you can easily spin up an identical environment, execute the exact same pipelines and deploy the same code exactly as it was. This is an extremely powerful capability!

7. Reuse and Sharing

Sharing or duplicating a pipeline (or part thereof) is as simple as copy and paste. It’s just text so you can even email it to a colleague if desired.

8. Multiple Engineers

Modern CI/CD pipelines can be large and complex, and more than one engineer might modify the same YAML file, causing a conflict. Source control platforms solved this problem long ago and provide easy to use tools for merging conflicting changes. For better or worse, YAML definitions allow multiple engineers to work on the same file at the same time.

9. Peer Reviews

If application code peer reviews are important, so are pipeline peer reviews. The ability to submit a pull request before bringing in new changes allows team members to weigh in and provides an added level of assurance that the changes will perform as desired.

10. Branching

Have a crazy idea you want to try out? Create a new branch for it and trigger a pipeline execution from that branch. If your idea doesn’t pan out, simply delete the branch. No harm done.

Though still in preview, the introduction of fully text-based pipeline definitions that can be committed to source control provides benefits that cannot be achieved with classic GUI-based definitions, especially for larger organizations. Be sure to consider YAML for your next Azure DevOps pipeline implementation.

K.C. Jones-Evans, a User Experience Developer, Josh Lathery, a Full Stack Developer, and Sara Darrah, a User Experience Specialist, sat down recently to talk through our design and project development planning process that we implemented for part of a project. This exercise was to help improve our overall project development planning and create best practices moving forward. We coined the term the “Design Huddle” to describe the process of taking the feature from a high level (often one sentence request from our customer) to a working product in our software, improving project planning for software development.

We started the design huddle because the contract we were working on already had a software process that did not include User Experience (UX). We knew we needed to include UX, but weren’t sure how it would work given the fast paced (2-week sprint cycles) software process we were contractually obligated to follow. We needed to come up with a software design planning solution that allowed us to work efficiently and cohesively. The Design Huddle allowed us to do just that.

What does the design huddle mean to you?

Josh: Previously the technical lead would have all the design processes worked out prior to being assigned a ticket for development. The huddle meant that I had more ownership in the feature upfront. It was nice to understand via the design process what the product would be used for and why.

K.C.: The huddle was an opportunity to get our thoughts together on the full product before diving into development right away. In the past, we have developed too quickly and discovered major issues. Development early equated to too much ownership in the code, so changes were painful when something needed to be corrected.

Sara: The huddle for me meant the opportunity to meet with the developers early to get on the same page prior to development. That way, when we had the final product, we could discuss details and make changes, but we were coming from the same starting point. I have been on other projects where I’m not brought in until after the development is finished, which immediately strains the relationship between UX and Development due to the big changes needed to finished code.

What does the design huddle look like?

  • The Team: a UX designer, at least one front-end or UX developer, a full-stack developer, the software tester, a graphic designer (as needed), and Subject Matter Experts (as needed)
  • The Meeting: This took time to work out. As with any group, sometimes there were louder or more passionate individuals that seemed to overshadow the rest. At the end of the day the group worked better with order and consensus:
    • Agendas were key: The UX lead created the agendas for our meetings. Without an agenda it was too easy to go down a rabbit hole of code details. This also helped folks who were spread across multiple projects focus on the task at hand faster. We included time to report on action items, old business/review of any design items and set the stage of what you hope to cover as new design work.
    • Action Items: Create and assign actions to maintain in the task management system (JIRA). This was a good translation for developers and helped everyone understand their responsibility leaving the room. These also really helped with sprint planning and the ability to scope tasking.
    • The facilitator had to be assertive: Yes, we are all professionals, and in an ideal world we could “King Arthur Round Table” these huddles. But the few times we tried this, the meeting quickly went off-track and down a rabbit hole. Many teammates would leave frustrated, feeling like we hadn’t made any progress. The facilitator was the UX specialist for our meetings, but we think the owner of the feature should facilitate. The facilitator needs to be willing to assert themselves during conversations, keep the meeting on track, and force topics to be tabled for another time when needed.
    • Include everyone and know the crowd: The facilitator needs to quickly understand the team they are working with to figure out how to include everyone. One way we ensured this happened was to do an around-the-room at the end of each meeting.
    • Visual Aid that the whole team can see during the meeting: Choose the tool that works best for the topic at hand- a dry erase board, a wireframe, JIRA tickets, or a mock-up can help people stay on track and ensure common understanding.
    • Table tough items and know when to end the meeting: sometimes in the larger meetings we needed to call it quits on a debate to give more time for thinking, research, and discussions. Any member of the team could ask for something to be tabled. A tabled item was given a smaller group of individuals to work through the details in between the regular design huddle sessions.
    • Choose the team participants wisely: The first few meetings most likely will involve the entire team, but smaller huddles (team of 2 or 3) can often work through details of tabled items more efficiently.

What is the benefit?

  • Everyone felt ownership in the product by the end of the design. Sara: My favorite thing about this learning experience was when one of the developers I hadn’t worked with before said he loved the process. He had the opportunity to provide input early and often, and by the time development started there weren’t really any questions left on how to implement. Josh: The process helped me feel like an engineer and not just a code monkey. K.C.: In other projects, we have been handed mock-ups without context. This process took the “gray area” away.
  • Developers were able to tame the ideals of the UX designer by understanding the system, not just the User Interface. Developers could assist UX by asking questions, helping understand the existing system limitations, and raising concerns that the design solution was too complicated given the time we had.
  • The software tester was able to understand the flow of the new designs, ask questions and assist in writing acceptance criteria that was specific, measurable, attainable, realistic and testable (SMART).
  • She provided guidance when we needed to go from concept to reality and ensured we understood the design requirements.
  • It was critical to design 1-2 sprints ahead of expected development as part of Agile software development. It allowed the best design to be used rather than forcing ourselves into something that could be completed in two weeks. Together we would know what our end goal was, then break down the concept into tiers. Each tier would have a viable product that was one sprint long and always kept the end product design in mind.

The Design Huddle is a way for teams to collaborate early on a new feature or application. We feel it is a great way to work User Experience into the Agile software process and simplify project development planning. We have taken our lessons learned and applied them to a new project the three of us are tackling together, and we have expanded the concept of the huddle to different members of the team. If you are struggling to incorporate proper design or feel frustration from teammates on an application task, this may be the software design planning solution for you!