As application developers, it’s our responsibility to ensure that the applications we create are using credentials and other secret configuration values in a secure way. Oftentimes, this task is overlooked in the pursuit of our primary concern: building new features and delivering business value quickly. In some cases, this translates into developers tolerating flat-out unsafe practices in the name of convenience, such as hardcoding secrets into the application source code or sharing secrets with team members via insecure communication channels and storing them on their development machines.

Fortunately, Microsoft provides a solution to this problem that should be attractive to both security experts and developers, known as “managed identities for Azure resources” (formerly “Managed Service Identities”). The idea is pretty simple: associate an Azure AD security principal* with your ASP.NET Core web app and let it use this ‘identity’ to authenticate to Azure Key Vault and pull secrets into memory at runtime. Microsoft provides the glue to make all of this easy for developers: on the programming side, they provide a simple library for your ASP.NET Core app to pull the secrets from Key Vault (as demonstrated here), and on the hosting side they implement the mechanisms that make the identity available to the app’s runtime via first-class support in Azure hosting environments and local development tools.

*For those unfamiliar, a security principal is any kind of digital ‘identity’ that can be authenticated and granted permissions that authorize it to access Azure resources. Examples include a user’s personal login, an AD group, or a service principal. Service principals are closely related to App Registrations in Azure: an app registration lets you create a custom identity and credentials just for an application or other automated process, so it can be granted access to only the Azure resources it needs to interact with.
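To make the programming side concrete, here is a minimal sketch of the wiring in an ASP.NET Core app. It assumes the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages and a placeholder vault name; DefaultAzureCredential resolves to the app’s managed identity when running in Azure and to your Visual Studio or Azure CLI login on a development machine.

using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // Pull secrets from Key Vault into the configuration system at startup.
                // The credential resolves to the managed identity in Azure, or to your
                // developer login (Visual Studio / Azure CLI) when running locally.
                config.AddAzureKeyVault(
                    new Uri("https://my-app-vault.vault.azure.net/"), // placeholder vault name
                    new DefaultAzureCredential());
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>()); // your existing Startup class
}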

So, what’s to be gained with this approach, and what are the tradeoffs? There are two audiences that have a stake in this:

  • The business stakeholders and security team that place a high priority on protecting applications and user data from exposure
  • The developers who just want to stay productive and spend less time worrying about how configuration values are provided.

I’ll address these groups and their distinct concerns separately.

The Security Perspective

There are numerous security benefits that come with this approach. Most critically, there are far fewer points of exposure for your secrets. The reliance on developers to do the right thing and manage secrets responsibly is almost entirely removed, to the point where developers would have to go out of their way to do the wrong thing. Another benefit is the administrative access control built into Key Vault, which makes it easy to manage who should and shouldn’t be able to run the app and access secrets.

We will start with how this approach limits the exposure of your secrets. Without managed identity and the ASP.NET Core Key Vault configuration provider, you are directly responsible for making your secrets available to your app, whether it’s hosted or running locally. For a hosted app, for example one running in Azure App Service, that means configuring the App Service’s application settings or modifying the appsettings.json file that you deploy with your app binaries. The secrets must be put there by the process that regularly builds and deploys your application, and that process also needs to store and retrieve these secrets from somewhere, which could be Key Vault, a release variable, or some other data store, maybe even just a VM or a user’s file system. Local development spreads the surface area of secret exposure even further. In the best case, you might pull secrets onto the developer’s machine using a script that stores them in the dev’s file system, but too often people take the path of least resistance and send them to each other over email or chat, or, even worse, hardcode them into source control.

In a managed identity world, the app simply reaches out to Key Vault for these secrets at runtime. This trims out several problematic points of exposure:

  1. No more accessing these credentials from the deployment pipeline where they might accidentally get captured in logs and build artifacts, and where they may be visible to those with permission to manage deployments.
  2. If a person is tasked with running your deployment scripts directly (to be clear – not ideal) they wouldn’t need access to app secrets to do a code deployment.
  3. No more storing these credentials in persistent storage of the app runtime host, where they can be inspected by anyone with management access to the host.
  4. No more spreading secrets across developers’ local devices, and no more insecure transmission of secrets over channels such as email or chat. It also makes it easy to avoid bad habits like hardcoding secrets into the app and checking them into source control.

Another benefit of this approach is that it doesn’t rely so heavily on developers and operations folks being mindful and responsible about security. Not only can they avoid insecurely distributing secrets amongst teammates, but they also don’t have to worry about removing them from their local machines or VMs when they no longer need them, because they are never stored. Of course, developers should always be mindful of and responsible for security, but realistically things don’t always work out that way. Developers frequently overlook security concerns while focusing on being productive, and people are often simply under-educated about security. Any opportunity to improve security via architecture and design, and to make humans less capable of doing the wrong thing, is a win.

Those with a focus on security will also appreciate the level of access control that Key Vault provides. Access to secrets is not managed via typical Azure RBAC (role-based access control). Instead, access policies are created to grant specific permissions to each user, service principal, or group. You can grant specific kinds of access, such as reading or editing/adding secrets. This lets Key Vault serve as a control center for deciding who should be allowed to run the app for a given environment. Adding a new team member or granting temporary access to debug a higher environment is as easy as adding a user to a Key Vault access policy that allows reading secrets only, and revoking access is as easy as removing them. See here for more info on securing access to Key Vault.

The Developer Perspective

Developers may have concerns that a centralized configuration approach could slow things down, but let’s look at why that doesn’t have to be the case, and why this can even improve velocity in many cases. As we’ll see, this can make it super easy to onboard new team members, debug multiple environments, regenerate keys due to recycling or resource recreation, and implement a deployment process.

We will start with onboarding. With your app configured to use managed identity and Key Vault, onboarding a new team member to run and debug the app locally simply involves adding them to an access policy that grants permission to read secrets from the Key Vault. An even easier approach is to create an AD group for your developers and assign a single Key Vault access policy to the entire group. After that, they just need to log in to the subscription from their personal machine using Visual Studio or the Azure CLI. Visual Studio has this support integrated and will apply it when you start your app from there, and the Azure CLI extends this support to any other IDE that runs the app using the dotnet CLI, such as VS Code. Once they have been granted authorization and logged in, they can simply start the app, which will retrieve the secrets from Key Vault using their permissions. If a team member eventually leaves the team, their access can be revoked by removing their access policy. They will have nothing to clean up, because the secrets were never stored on their computers; they only ever lived in the app’s runtime memory.

Another benefit of centralizing secrets for shared resources shows up in situations where secrets change often. Particularly in a development environment, you may have good reason to delete resources and redeploy them, for example, to test an infrastructure deployment process. When you do this, the secrets and connection strings for your resources will have changed. If every developer had their own copy of the secrets on their machine, this kind of change would break everyone’s local environment and disrupt their work until they acquired all the latest secrets. In a managed identity scenario, this kind of change is seamless. The same benefit applies when new resources are added to your infrastructure: dev team members don’t need to acquire new connection secrets when they pull the latest code that uses a new service, because the app will just pull them from Key Vault.

Another time secrets may change is when they expire or when you intentionally rotate them for the sake of security. Using a key vault can make it significantly easier to implement a key rotation strategy. The Key Vault configuration provider can be configured to pull app secrets once at app start time (which is the default) or at a regular interval. Both can be part of a secret/key rotation strategy, but the first requires orchestrating an app restart after changing a secret, which isn’t necessary with the second approach. Implementing key rotation support in your app is fairly straightforward: most Azure resources provide two valid keys at a time to support rotation. You should store both keys for each service in Key Vault, but only use one of them in your app until it becomes invalid. Once your client hits an auth error, you should catch that exception, set the other key as the actively used key, and replay the request. Using the second approach, configure the Key Vault configuration provider to refresh on an interval, maybe 5 or 10 minutes, and then have an external process (Azure Automation runbooks are one recommended solution) reset only one key at a time. If both keys are cycled at the same time, your app config won’t refresh fast enough to get the new keys and will start to fail. By rotating one at a time, you ensure at least one valid key is available to your app at any given time.
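As a rough illustration of the two-key fallback described above, here is a minimal sketch; the configuration keys, class name, and exception type are placeholders, so substitute whatever authentication failure your client SDK actually surfaces.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

// Sketch: reads both keys from configuration (populated from Key Vault) and
// falls back to the secondary key when the active one is rejected, replaying
// the failed request once with the other key.
public class RotatingKeyCaller
{
    private readonly IConfiguration _config;
    private bool _usePrimary = true;

    public RotatingKeyCaller(IConfiguration config) => _config = config;

    private string ActiveKey =>
        _config[_usePrimary ? "MyService:PrimaryKey" : "MyService:SecondaryKey"];

    public async Task<T> ExecuteAsync<T>(Func<string, Task<T>> callWithKey)
    {
        try
        {
            return await callWithKey(ActiveKey);
        }
        catch (UnauthorizedAccessException) // placeholder for your SDK's auth exception
        {
            _usePrimary = !_usePrimary;          // switch to the other key
            return await callWithKey(ActiveKey); // replay the request once
        }
    }
}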

Another way that this can improve developer agility is that you can easily change the environment you target with a simple configuration change. For example, let’s say some pesky issue is popping up in your UAT environment that isn’t showing up anywhere else, and you’re tearing out your hair looking through logs trying to understand it. You’re at the point where you’d give your left foot to just run the app locally targeting that environment so you can attach a debugger and step through the problematic code. Without using managed identity and the key vault configuration provider you would have to copy the secrets for that environment to your local computer. This is gross enough that you should probably seek any other option before resorting to it. However, if you were using managed identity and key vault, you could simply reconfigure the name of the key vault you want your local app to use with the one for the target environment and create a temporary access policy for yourself. As a good practice, you should still revoke your access afterward, but at least you have nothing sensitive on your local device to clean up.

Finally, let’s talk about the benefits of using this approach from the perspective of building a deployment pipeline. Specifically, the benefit is that you have one fewer thing to implement and worry about. Since secrets are centralized in the key vault and pulled during app runtime, you don’t need to have your process pull in the secrets from wherever you store them, then pave them into an appsettings.json file, or assign them as PaaS-level environment variables. This saves you time not having to code this behavior, and it also saves you time when something breaks because there’s one fewer place where something could have gone wrong. Having your app go directly to key vault streamlines the configuration and creates fewer opportunities to break things. It also has the advantage that you don’t need to run a full app deployment just to update a secret.

Counter Arguments

This may sound good so far, but I suspect you may already have a few concerns brewing. Maybe you’re thinking some of the following: Do I have to start keeping all my configuration values in Key Vault? Doesn’t this represent additional configuration management overhead? Won’t I have conflicts with other team members if I need to change secret values to personalize my local environment? Doesn’t this create a hard dependency on an internet connection, meaning I won’t be able to run a local environment fully offline? All of these are valid questions, but I think you’ll see that they all have acceptable and satisfying answers.

So, does this mean that Key Vault needs to become the singular place for all app configurations, both secret and non-secret? If we only put secrets there, then don’t many of the above arguments about the benefits of centralization become moot, since we still need to do distributed config management for non-secret values? Azure’s answer to this question is Azure App Configuration, a centralized app configuration service that gives you a nice level of control over non-secret configuration settings for your app, including cool features like config value versioning and feature flags. I won’t go too deep into the details of this service here, but it’s worth noting that it also supports managed identity and can integrate with your app in the same way as Key Vault. However, I’ll also offer the suggestion that you can incorporate App Configuration on an as-needed basis. If you are dealing with a small app with less than 10 environment-specific settings, then you might enjoy the convenience of just consolidating all your secret and non-secret values into Key Vault. The choice comes down to preference, but keep in mind that if your settings are changing semi-often or you expect your app to continue adding new config settings, you may get tired of editing every config using Key Vault’s interface. It’s tailored for security, so it should generally be locked down as much as possible. It also doesn’t have all the features that App Configuration does.

Regarding configuration management overhead, the fact is that, yes, this does require creating/managing a Key Vault service and managing access policies for dev team members. This may sound like work you didn’t previously have, but I assure you this kind of setup and ownership is lightweight work that’s well worth the cost. Consider all the other complexities you get to give up in exchange: with centralized config management, you can now do code-only app deployments that can ignore configuration management entirely. That makes it faster and easier to create your deployment process, especially when you have multiple environments to target, and will give you high marks for security. As we also mentioned, centralizing these config settings makes it simpler to onboard new team members and possibly to iterate on shared infrastructure without breaking things for the team.

You may also be concerned that sharing your configuration source will result in a lot of stepping on toes within your team during development. But consider this: nothing stops you from using the same kinds of local configuration approaches developers already use in addition to Key Vault. ASP.NET Core’s configuration system is based on the idea of layering configuration providers in a stack, where the last-in wins. If you want to allow your developers to override specific values for development purposes, for example to point at a personal database instance (maybe even a local database, like SQL Server or the Cosmos DB Emulator), you can still pass those as environment variables, in appsettings.Development.json, or via ‘dotnet user-secrets’. This doesn’t defeat the purpose of centralizing secret or config management; the benefits of centralization apply most to shared resources, and if you want to use a personal resource, there’s no harm in personalizing your config locally. An alternate approach to personalization is to provision your own complete set of resources that make up an environment in Azure. Ideally, you already have a script or template to create a new environment easily (if you don’t, I strongly recommend building one), in which case you’ll get your own Key Vault as well, and you can simply point your local app at it.

Lastly, I’d like to address the question of whether this makes it impossible to do fully offline local development. There are a couple of considerations here:

  1. How to target local services instead of live-hosted ones
  2. Overcoming the fact that the Key Vault configuration provider relies on an internet connection.

The first is handled the same way you would handle configuration personalization: by overriding any config settings in something like appsettings.Development.json or ‘dotnet user-secrets’ to target your local database or Azure service emulator. The second is relatively simple: just put the line of code that configures Key Vault as a config provider within an ‘if’ condition that checks whether you are running in a development environment (see a sample approach here). This assumes that Key Vault is truly your only remaining dependency on an internet connection. If it seems strange to hear me recommend disabling Key Vault after advocating for it, consider again that the benefits of centralized configuration apply most to shared resources, so if you are designing to support an entirely local development environment, then Key Vault becomes unnecessary when running in that mode.
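A minimal sketch of that guard, building on the Program.cs wiring shown earlier (the vault URL is still a placeholder); in Development, configuration values come from local sources such as appsettings.Development.json, ‘dotnet user-secrets’, or environment variables instead.

// Inside ConfigureAppConfiguration: only reach out to Key Vault when the app
// is not running in the Development environment, so local runs can stay offline.
.ConfigureAppConfiguration((context, config) =>
{
    if (!context.HostingEnvironment.IsDevelopment())
    {
        config.AddAzureKeyVault(
            new Uri("https://my-app-vault.vault.azure.net/"), // placeholder vault name
            new DefaultAzureCredential());
    }
    // In Development, values come from appsettings.Development.json,
    // 'dotnet user-secrets', or environment variables (last-in wins).
})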

Using centralized configuration services like Key Vault via managed identity requires a different mindset for developers, but it comes with clear advantages, especially when it comes to limiting the exposure of your application secrets. This kind of solution is an absolute win from a security perspective, and it has the potential to considerably improve your development team’s experience as well. Thanks to ASP.NET Core’s pluggable configuration system, it’s easy to apply to existing projects, especially if you’re already storing secrets in Key Vault, so consider how you could incorporate it into your existing projects today, and don’t miss the chance to try it in your next greenfield project. Your security advocates and fellow developers just might thank you.

If you are like me, you have used cloud services in a limited fashion to create VMs for testing, or perhaps you have used them extensively. You’d also like to gain an understanding of the broader group of services offered by cloud providers. In my case, this was driven by my recent move into an Engagement Manager position and my desire to help AIS expand our business by developing new opportunities. I realized that I needed at least a top-level understanding of our offerings in order to recognize potential use cases AIS could address, propose more cost-effective alternatives to current solutions, and develop completely new solutions to improve client business. It was obvious to start with Microsoft’s Azure and Amazon’s AWS platforms, since these are the top focus of AIS and the industry as a whole.

What was not obvious was where to start. Both platforms are not only extremely broad but also moving targets. I needed to find a way to dip into this process without drowning in all the information, while still handling the responsibilities of my day job. I looked at classroom training options and YouTube videos, and continued researching until I stumbled upon two paths. These paths not only provided a nice prepackaged set of materials, but could be completed at my own pace, at home, and resulted in certifications. I will get to the details, but first a word about certifications.

I am sure many of you rolled your eyes when you read the “certifications” part of that second-to-last sentence. Yes, certifications are not as valuable an indicator of a person’s skills and knowledge in an area as real-world experience. However, they provide the following benefits, in order from least to most important:

  1. Provide a good starting point for someone that has no current projects in an area.
  2. Fill knowledge gaps that even a person with experience in an area has, especially in those services or techniques that are not used often.
  3. Provide value to AIS in maintaining various statuses.
  4. Provide a potential client with proof that you at least have an understanding of the basics.
  5. Most importantly, they result in a $500 bonus from AIS, and reimbursement of testing and training costs!

The paths I found are the Microsoft Azure Fundamentals learning path and certification and the Amazon AWS Cloud Practitioner training and certification. The training for both includes videos, with the Azure path estimated at about ten hours of content and the AWS training at about five. The Azure estimate was spot on, while the AWS training took me a bit longer due to my complete lack of experience with the platform.

Microsoft Azure Fundamentals

This path included videos, reading, hands-on exercises, and quick knowledge checks. It can be completed with an Azure account that you create just for the training, or with an account linked to the AIS subscription if you have one. Both the reading and the videos provide just enough information without getting bogged down in the minutiae. The only thing I had done with Azure prior to the training was creating a few VMs to set up SharePoint environments. I had done that years ago, and I didn’t do much within those environments.

For me, most of the content was new. I believe that if I had more in-depth experience, the training would have filled in gaps with specific details.

These were the topics I found either completely new or helpful in understanding how to look at and/or pitch Azure services to clients:

  • Containers, app services, and serverless options and how they work
  • Reducing latency with Traffic Manager
  • Azure policies and tags to enforce standards
  • Review of data centers, region pairs, geographies, availability zones
  • Various ways to predict and manage costs, such as calculators, Azure Cost Management, and Azure Advisor

The training took me roughly two-thirds of the estimated time, after which I went through the knowledge checks for each section once more. After that, I spent maybe an hour reviewing some things from the beginning, and then I took the exam and passed. The exam process was interesting and can be done from home with proctoring software that enables someone to watch you remotely. Prior to the exam, you are required to show the proctor the entire room and remove anything that might enable you to cheat.

After I completed the certification process, I submitted the cost of the exam ($100) as an expense as well as submitted my request for a certification bonus. I received both in a timely manner. See links at the end of this post for materials concerning reimbursements and bonuses. Don’t forget approval from your EM/AE prior to incurring any costs for which you might want reimbursement and to submit your updated certifications spreadsheet to the AIS PI Team.

AWS Cloud Practitioner

This path consists exclusively of videos. In my opinion, the content is not as straightforward as the Azure Fundamentals content, and the videos cannot be sped up, which can be very frustrating. The actual content was also a bit difficult to find; I have provided links at the conclusion of this post for quick reference. Much of the video content involves Linux examples, so PuTTY and other command-line tools were used. This added a further layer of complexity that I felt took away from the actual content (do I really need to know how to SSH into something to learn about the service?).

As far as content goes, everything is video; there is no reading, no hands-on exercises, and no knowledge checks. I felt the reading in the Azure path broke things up, the hands-on exercises crystallized a few things for me, and the knowledge checks ensured I was tracking, so I would like to see Amazon add some of these elements. That being said, the videos are professionally done and include helpful graphics. Even with zero AWS experience, I found I was able to grasp the concepts, and the videos do a decent job of presenting use cases for each service.

My biggest complaint is the inability to speed up videos that are obviously paced for the lowest common denominator, and I admittedly found my attention waning often. Something that helped was taking notes, which let me listen, write, and not get bored.

Amazon provides a list of recommended prep (see links below) that includes self-paced training, a one-day classroom option, an exam guide, a list of four base white papers with links to many others, practice exams, and a link to schedule the certification exam. I scanned the white papers; they all looked useful, but not necessary to knock out the exam. I say this with confidence, as I was able to pass the exam without a detailed review of the white papers. My technique was to outline the videos, then review the outline over the course of a couple of weeks.

Summary

Whether you are a budding developer or analyst wishing to get a broad overview, a senior developer who wants to fill gaps, or a new EM like me who wants a bit of both, the Microsoft Azure Fundamentals learning track/certification and the Amazon AWS Cloud Practitioner training/certification are a good place to start. AIS will cover the costs and provide you with some additional scratch for your effort. Obtaining these certifications also improves AIS’ standing with providers, clients, and the community as a whole. It also improves your value to clients, meets the criteria of certain AIS Career Paths and Competencies, and, who knows, you might learn something!

Links:

  1. Azure Fundamentals Learning Path: https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
  2. Azure Fundamentals Cert: AZ900 Microsoft Azure Fundamentals Exam.
  3. AWS Cloud Practitioner Preparation and Cert:  AWS Recommended Prep
  4. Training Reimbursements and Certification Bonuses: https://appliedis.sharepoint.com/sites/HR/Pages/Additional-benefits.aspx
  5. Submitting certification inventory to PI Team: Reach out to AIS – Process Improvement Team (ais-pi-team@appliedis.com) for more info.
As promised in the previous blog post, here is a detailed explanation of how to connect to APIs secured in Azure AD from SharePoint Framework (SPFx) web parts. Please read part I of this blog, which gives a thorough overview of the SharePoint Framework, compares it with other models, and covers its constraints and disadvantages, before diving in further.

Connecting to APIs is essential functionality today, because it extends a web part’s reach to outside data repositories. SharePoint web parts can render data not only from SharePoint lists and libraries but also from external repositories, which may be owned by anyone outside the organization and hosted on different platforms and domains. These external repositories are connected to SharePoint for data retrieval via API (Application Programming Interface) calls, and SPFx ships with several namespaces to handle the communication between an SPFx web part in SharePoint Online and other repositories.

Types of API Communications from SPFx

  • Connect to SharePoint APIs to access data residing in SharePoint lists and libraries (SPHttpClient with OData)
  • Connect to Microsoft Graph to access users and other user-related info from Azure Active Directory (AAD) (MSGraphClient through MSGraphClientFactory)
  • Connect to enterprise APIs secured in Azure AD (using AadHttpClient and aadHttpClientFactory)
    • Single-tenant implementation, where both Azure and SharePoint Online are under the same tenant; this blog covers the details of implementing this scenario
    • Multi-tenant implementation, where Azure and SharePoint Online are in different tenants
  • Connect to anonymous APIs (using HttpClient to call public APIs, such as weather services)

Connect to Azure AD Secured APIs

Microsoft Web API Permissions

Figure Credit: Microsoft

Pre-Requisites

If your environment is already set up, ensure you have the latest version of the Yeoman SharePoint generator by entering:

npm update -g @microsoft/generator-sharepoint@latest

Steps to Develop, Deploy, and Test SPFx Connecting to Function API Secured in Azure AD

Once all the pre-requisites are met, follow the steps below to develop, deploy, and test a SharePoint Framework web part connecting to an Azure API secured in Azure Active Directory.

1. Create an Azure function (HttpTrigger) returning mock data

Create Azure Function

Figure: Create a new Azure function

New Function Created

Figure: New Azure function is created

2. Create an HttpTrigger & add C# code to return the list of orders

Create Http Trigger

Figure: C# Azure Function code to return the list of orders

DOWNLOAD THE AZURE FUNCTION CODE HERE
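For reference, here is a minimal sketch of what such a function might look like; the order shape and values are invented for illustration, and the downloadable code above is the actual sample used in this walkthrough.

using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetOrders
{
    // AuthorizationLevel.Anonymous because authentication is handled at the
    // platform level by App Service Authentication (Azure AD), configured in
    // the next steps.
    [FunctionName("GetOrders")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req)
    {
        // Mock data; a real implementation would query a data store.
        var orders = new List<object>
        {
            new { Id = 1, Customer = "Contoso", Total = 100.00 },
            new { Id = 2, Customer = "Fabrikam", Total = 250.00 }
        };
        return new OkObjectResult(orders); // serialized to JSON in the response body
    }
}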

3. Secure the Azure function by enabling authentication/authorization via Azure Active Directory (AAD) and create an app in AAD. Verify the Azure function works when called from the browser.

Secure Azure Function Screenshot

Figure: Configure authentication/authorization for the Azure function

4. Enable ‘App Service Authentication’

Authenticating App Services Screenshot

Figure: Selecting Azure Active Directory for authentication/authorization

5. Active Directory authentication is set up, and the API is secured in Azure AD
Register Azure Function

Figure: Registered Azure function in Azure AD

6. Enable CORS (Cross-Origin Resource Sharing). Even though Azure & SharePoint Online are in the same tenant, they are in different domains.

Cross Origin Resource Sharing

Figure: Configure CORS

7. Add the SharePoint tenant URL.

Add SharePoint for CORS

Figure: Add the SharePoint URL to CORS so the SharePoint site is allowed to call the Azure function

8. The Azure function API is now secured in Azure AD, and its application ID will be used in the SPFx web part.


Azure function installed on AD

Figure: Azure function is registered in the Azure AD as an Enterprise application

9. Open the SharePoint Online tenant/admin center in Office 365.

Available sites in SharePoint admin center

Figure: Available sites in the SharePoint admin center

10. Create an SPFx web part project to render the data by connecting to the API secured in Azure AD. Use Yeoman to generate the web part. Use this link for more information on generating web parts.

Yeoman generator to generate SPFx web part

Figure: Yeoman generator to generate SPFx web part

11. Add web API permission requests in config/package-solution.json file.

In config\package-solution.json, add the two web API permission requests:

"webApiPermissionRequests": [
  {
    "resource": "contoso-api",
    "scope": "user_impersonation"
  },
  {
    "resource": "Windows Azure Active Directory",
    "scope": "User.Read"
  }
]

SPFx web part configuration file

Figure: API permissions in SPFx web part configuration file

12. Import namespaces for enterprise API communication.

import AadHttpClient to connect with API in src\webparts\[webpartname]\[webpartname]WebPart.ts

import { AadHttpClient, HttpClientResponse } from '@microsoft/sp-http';

src\webparts\[webpartname]\[webpartname]WebPart.ts

namespaces for enterprise API communication

DOWNLOAD THE WEB PART CODE HERE

13. Build, package, and upload the package to the SharePoint App Catalog.

gulp bundle --ship && gulp package-solution --ship

gulp clean (for redeploying after updates)

14. Add the SPFx package to the tenant app catalog in your Office 365 tenant. The deployment pushes the package’s API permission requests to the SharePoint admin center.

SPFx deploys API related file to the SharePoint

15. Approve requested API permissions.

From the SharePoint admin center in Office 365, approve the API from the API management page.

Once the API is approved, the SPFx web part can be added to a SharePoint site page.


API permissions are available in SharePoint online admin center

Figure: API permissions are available in SharePoint online admin center

16. Create a new site page from the developers’ site and add the SPFx web part.

Add the web part to SharePoint page

Figure: Add the web part to the SharePoint page

17. If all goes well, your web part will be rendered with data that is served from the API call!
Rendered with Data

SPFx – Connect to APIs Gotchas

  • Connecting to an API secured in Azure AD did not work via the SPFx AadHttpClient & aadHttpClientFactory in SharePoint 2019 on-premises; this.context.aadHttpClientFactory did not work. I had chosen “SharePoint 2019 and SharePoint Online” as the target when creating the SPFx web part via the Yeoman generator. Choose only ‘SharePoint Online’ to use AadHttpClient & aadHttpClientFactory.
  • Microsoft’s example code did not work as-is. The Azure function needed a slight tweak, and an additional permission must be added to “webApiPermissionRequests” in config\package-solution.json:

{
  "resource": "Windows Azure Active Directory",
  "scope": "User.Read"
}

  • Single-tenant vs. multi-tenant access
    • First, I set up the Azure function and secured it in AAD in a personal MSDN subscription tenant, then tried to connect from a free developer Office 365/SharePoint Online tenant’s SPFx web part. The API permission in the SharePoint admin center could not be approved; accessing an API from a different tenant did not work with the way the Azure function API was configured in AAD.
    • To overcome that issue, I set up an Azure tenant under the developer Office 365 subscription using the same credentials, after which the API could be approved. More Info…
    • Alternatively, the AAD API authentication can be configured as multi-tenant. More info…
  • gulp clean is important when redeploying; otherwise, new updates will not be picked up
  • Debugging is quite important for troubleshooting 

TRY IT YOURSELF! DOWNLOAD THE CODE TO GET STARTED.


This blog is a follow-on post about Azure Cognitive Services, Microsoft’s offering for enabling artificial intelligence (AI) applications in daily life. The offering is a collection of AI services with capabilities around speech, vision, search, language, and decision.

In Azure Cognitive Services Personalizer: Part One, we discussed the core concepts and architecture of Azure Personalizer Service, Feature Engineering, its relevance, and its importance.

In this blog, Part Two, we will go over a couple of use cases in which Azure Personalizer Service is implemented. We will look at the features used, the reward calculation, and the test run results. Stay tuned for Part Three, where we will list recommendations and capacities for implementing solutions using Azure Personalizer Service.

Use Cases and Results

The two use cases implemented using Personalizer involve ranking content for each user of a business application.

Use Case 1: Dropdown Options

Different users of an application with manager privileges see a list of reports that they can run. Before Personalizer was implemented, the list of dozens of reports was displayed in alphabetical order, requiring most managers to scroll through the lengthy list to find the report they needed. This created a poor user experience for daily users of the reporting system, making it a good use case for Personalizer. The tooling learned from user behavior and began to rank frequently run reports at the top of the dropdown list. Frequently run reports differ from user to user and change over time for each manager as they are assigned to different projects. This is exactly the situation where Personalizer’s reward score-based learning models come into play.

Context Features

In our dropdown options use case, the context features JSON is as below, with sample data:

{
    "contextFeatures": [
        { 
            "user": {
                "id":"user-2"
            }
        },
        {
            "scenario": {
                "type": "Report",
                "name": "SummaryReport",
                "day": "weekend",
                "timezone": "est"
            }
        },
        {
            "device": {
                "mobile":false,
                "Windows":true,
                "screensize": [1680,1050]
            }
        }
    ]
}

Actions (Items) Features

Actions were defined as the following JSON object (with sample data) for this use case:

{
    "actions": [
    {
        "id": "Project-1",
        "features": [
          {
              "clientName": "Client-1",
              "projectManagerName": "Manager-2"
          },
          {

                "userLastLoggedDaysAgo": 5
          },
          {
              "billable": true,
              "common": false
          }
        ]
    },
    {
         "id": "Project-2",
         "features": [
          {
              "clientName": "Client-2",
              "projectManagerName": "Manager-1"
          },
          {

              "userLastLoggedDaysAgo": 3
           },
           {
              "billable": true,
              "common": true
           }
        ]
    }
  ]
}

Reward Score Calculation

The reward score was calculated based on which report the user actually selected from the ranked list of reports displayed in the dropdown, with the following calculation (a code sketch follows the list):

  • If the user selected the 1st report from the ranked list, then reward score of 1
  • If the user selected the 2nd report from the ranked list, then reward score of 0.5
  • If the user selected the 3rd report from the ranked list, then reward score of 0
  • If the user selected the 4th report from the ranked list, then reward score of -0.5
  • If the user selected the 5th report or above from the ranked list, then reward score of -1
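A minimal sketch of that mapping in C#; the class and method names are hypothetical, and the returned value is the score you would report to the Personalizer Reward API for the corresponding Rank event.

public static class ReportReward
{
    // Maps the zero-based position of the report the user actually selected
    // (within the Personalizer-ranked list) to a reward score.
    public static double ForSelection(int selectedRankIndex)
    {
        switch (selectedRankIndex)
        {
            case 0: return 1.0;   // user picked the top-ranked report
            case 1: return 0.5;
            case 2: return 0.0;
            case 3: return -0.5;
            default: return -1.0; // 5th-ranked report or lower
        }
    }
}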

Results

View of the alphabetically ordered report names in the dropdown before personalization:

alphabetically ordered report names in the dropdown before personalization

View of the Personalizer ranked report names in the dropdown for the given user:

Azure Personalizer ranked report names based on frequency

Use Case 2: Projects in Timesheet

Every employee in the company logs a daily timesheet listing all of the projects the user is assigned to, along with other projects such as overhead. Depending upon the employee’s project allocations, his or her timesheet table could have anywhere from a few to a couple of dozen active projects listed. Even though an employee may be assigned to several projects, particularly at the lead and manager levels, they typically don’t log time against more than 2 to 3 projects over a stretch of weeks to months.

Before personalization, the projects in the timesheet table were listed in alphabetical order, again resulting in a poor user experience. Even more troublesome, frequent user errors caused the accidental logging of time in the incorrect row. Personalizer was a good fit for this use case as well, allowing the system to rank projects in the timesheet table based on time logging patterns for each user.

Context Features

For the timesheet use case, the context features JSON object is defined as below (with sample data):

{
    "contextFeatures": [
        { 
            "user": {
                "loginid":"user-1",
                "managerid":"manager-1"
		  
            }
        },
        {
            "scenario": {
                "type": "Timesheet",
                "day": "weekday",
                "timezone": "ist"
            }
        },
        {
            "device": {
                "mobile":true,
                "Windows":true,
                "screensize": [1680,1050]
            }
        }
     ]
}

Actions (Items) Features

For the timesheet use case, the Actions JSON object structure (with sample data) is as follows:

{
    "actions": [
    {
        "id": "Project-1",
        "features": [
          {
              "clientName": "Client-1",
              "userAssignedForWeeks": "4-8"
          },
          {

              "TimeLoggedOnProjectDaysAgo": 3
          },
          {
              "billable": true,
              "common": false
          }
        ]
    },
    {
         "id": "Project-2",
         "features": [
          {
              "clientName": "Client-2",
              "userAssignedForWeeks": "8-16"
          },
          {

              " TimeLoggedOnProjectDaysAgo": 2
           },
           {
              "billable": true,
              "common": true
           }
        ]
    }
  ]
}

Reward Score Calculation

The reward score for this use case was calculated based on the proximity between the ranking of projects in the timesheet returned by Personalizer and the rows in which the user actually logged time, as follows:

  • Time logged in the 1st row of the ranked timesheet table, then reward score of 1
  • Time logged in the 2nd row of the ranked timesheet table, then reward score of 0.6
  • Time logged in the 3rd row of the ranked timesheet table, then reward score of 0.4
  • Time logged in the 4th row of the ranked timesheet table, then reward score of 0.2
  • Time logged in the 5th row of the ranked timesheet table, then reward score of 0
  • Time logged in the 6th row of the ranked timesheet table, then reward score of -0.5
  • Time logged in the 7th row or above of the ranked timesheet table, then reward score of -1

The above approach to reward score calculation assumes that most of the time users will not need to fill out their timesheet for more than 5 projects at a given time. Hence, when a user logs time against multiple projects, the per-row scores can be added up and then capped to the range of -1 to 1 when calling the Personalizer Reward API.
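A minimal sketch of that aggregation, assuming zero-based row indexes for the rows in which the user logged time (the names here are hypothetical):

using System;
using System.Collections.Generic;
using System.Linq;

public static class TimesheetReward
{
    // Combines per-row rewards for a single Rank call and clamps the total
    // to the [-1, 1] range expected by the Personalizer Reward API.
    public static double Calculate(IEnumerable<int> loggedRowIndexes)
    {
        double total = loggedRowIndexes.Sum(RowScore);
        return Math.Max(-1.0, Math.Min(1.0, total));
    }

    private static double RowScore(int rowIndex)
    {
        switch (rowIndex)
        {
            case 0: return 1.0;
            case 1: return 0.6;
            case 2: return 0.4;
            case 3: return 0.2;
            case 4: return 0.0;
            case 5: return -0.5;
            default: return -1.0; // 7th row or lower
        }
    }
}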

Results

View of the timesheet table having project names alphabetically ordered before personalization:

project names alphabetically ordered before Azure personalization

View of the timesheet table where project names are ordered based on ranking returned by Personalization Service:

timesheet table ordered by Azure Personalization Service

Testing

In order to verify the results of implementing Personalizer in our selected use cases, unit tests proved effective. This approach was helpful in two important respects:

  1. Injecting a large number of user interactions (learning loops)
  2. Simulating user behavior that follows a specific pattern

This provided an easy way to verify how Personalizer reflects the current and changing trends in user behavior injected via unit tests, using reward scores and the exploration capability. It also enabled us to test the different configuration settings provided by the Personalizer Service.

Test Run 1

This first test run simulated different user choices with different exploration settings. The results show how many learning loops it took before the rankings reflected the user’s preference, first intermittently and then consistently.

  • User selection of Project-A: Personalizer Service started ranking Project-A at the top intermittently after 10–20 learning loops and ranked it consistently at the top after 100 learning loops, with exploration set to 0%.
  • User selection of Project-B: Personalizer Service started reflecting the change in user preference (from Project-A to Project-B) by ranking Project-B at the top intermittently after 100 learning loops and ranked it consistently at the top after 1,200 learning loops, with exploration set to 0%.
  • User selection of Project-C: Personalizer Service started reflecting the change in user preference (from Project-B to Project-C) by ranking Project-C at the top intermittently after 10–20 learning loops and ranked it almost consistently at the top after 150 learning loops, with exploration set to 50%. Personalizer adjusted to the new user preference more quickly when exploration was utilized.
  • User selection of Project-D: Personalizer Service started reflecting the change in user preference (from Project-C to Project-D) by ranking Project-D at the top intermittently after 10–20 learning loops and ranked it almost consistently at the top after 120 learning loops, with exploration set to 50%.

Test Run 2

In this second test run, the impact of adding and removing sparse features (features with little effect) is observed.

  • User selection of Project-E: Personalizer Service started reflecting the change in user preference (from Project-D to Project-E) by ranking Project-E at the top intermittently after 10–20 learning loops and ranked it almost consistently at the top after 150 learning loops, with exploration set to 20%.
  • User selection of Project-F: Personalizer Service started reflecting the change in user preference (from Project-E to Project-F) by ranking Project-F at the top intermittently after 10–20 learning loops and ranked it almost consistently at the top after 250 learning loops, with exploration set to 20%.
  • User selection of Project-G: Two less effective (sparse) features of type datetime were removed. Personalizer Service started reflecting the change in user preference (from Project-F to Project-G) by ranking Project-G at the top intermittently after 5–10 learning loops and ranked it almost consistently at the top after only 20 learning loops, with exploration set to 20%.
  • User selection of Project-H: The two datetime sparse features were added back. Personalizer Service started reflecting the change in user preference (from Project-G to Project-H) by ranking Project-H at the top intermittently after 10–20 learning loops and ranked it almost consistently at the top after 500 learning loops, with exploration set to 20%.

Thanks for reading! In the next part of this blog post, we will look at the best practices and recommendations for implementing Personalizer solutions. We will also touch upon the capacities and limits of the Personalizer service at present.

Microsoft has opened their two newest Azure regions on 27 October 2014 in Australia, as detailed in their press release, New Microsoft Azure Geo opens for business in Australia. With the two new regions online, Microsoft brings their total number of Azure data centres to 19 worldwide. The new regions are located in New South Wales and Victoria and bring with them the full Azure feature set, including Compute, Geo-redundant Storage, and Data Services. Read More…
Welcome to part six of our blog series based on my latest Pluralsight course: Applied Azure. Previously, we’ve discussed HIPAA Compliant Apps with Windows Azure Trust Center, Azure Web Sites, Azure Worker Roles, Identity and Access with Azure Active Directory, and Azure Service Bus and MongoDB.

Motivation

Question: “How does an admin protect their SharePoint farm from poorly written custom code?” Answer: “Force custom code to run in the SharePoint sandbox mode.” Not quite! It turns out that running in sandbox mode (as the name suggests, a restricted execution mode within SharePoint) is not very productive because of the performance penalty and the very limited capabilities available to code running in it. A better approach is to move the code “outside” of SharePoint and into a “private” execution environment (so that errant developers can shoot themselves in the foot, but not everyone else). Read More…

Amazon Web Services (AWS) CTO Werner Vogels offers this great piece of cloud advice: “Treat everything as a programmable resource, including data centers, networks, compute, storage and load balancers.”

In other words, automate every aspect of your (cloud-based) infrastructure.

Given AIS’ years of experience with SharePoint, we are always looking for ways to make the underlying infrastructure more cost effective, scalable and robust. Fortunately, the benefits of automation apply equally to a SharePoint 2013 farm hosted in the cloud — whether it’s the ability to dynamically provision a SharePoint 2013 farm on the fly, or the ability to scale up and down based on load, or the ability to make the SharePoint 2013 farm more fault-resilient.

We’ve written about two automated deployment approaches to SharePoint 2013; one for Amazon Web Services and one for Azure. In case you missed them…

Our AWS-based SharePoint 2013 script and source code can be found here.

Our Windows Azure-based SharePoint 2013 script and source code can be found here.

Our work with Rolling Stone and Bondi Digital Publishing is yet another example of how AIS can develop technology that creates new revenue streams for publishers. We built a digital distribution platform to usher print publications like Rolling Stone into the digital age – by providing them with a turnkey solution to deploy print magazine archives online for viewing on desktops, laptops and mobile devices. For Rolling Stone, the initial launch included more than 1,000 issues from 1967 to the present.

Click here to read more about the distribution platform and how we customized it for the Rolling Stone archives.