SPFx Modern Web Development

SharePoint has been a widely adopted and popular content management system for many large organizations over the past two decades. From SharePoint Portal Server in 2001 to SharePoint Server 2019 and SharePoint Online, the ability to customize the user experience (UX) has evolved dramatically, keeping pace with the evolution of modern web design. The SharePoint Framework (SPFx) is a page and web part model that provides full support for client-side SharePoint development using open-source tooling. SPFx works in SharePoint Online as well as on-premises SharePoint 2016 and SharePoint 2019, and it works in both modern and classic pages. SPFx Web Parts and Extensions are the latest powerful tools we can use to deliver great UX!

Advantages of using SharePoint Framework

1. It Can’t Harm the Farm

Earlier SharePoint (SP) customizations executed on the server as compiled, server-side code written in a language such as C#. Historically, we created web parts as full-trust C# assemblies that were installed on the SharePoint servers and had access to disrupt SharePoint for all users. Because that code ran with far greater permissions on the server, it could adversely impact or even crash the entire farm. Microsoft tried to solve this problem in SP 2010 with Sandbox solutions, followed by the App Model, now known as the Add-In model.

SPFx development is based on JavaScript running in the browser, making REST API calls to SharePoint and Office 365 back-end workloads; it does not touch the internals of SharePoint.

The SharePoint Framework is a safer, lower-risk model for SharePoint development.

2. Modern Development Tools

Building SPFx elements with JavaScript and its wealth of libraries, the UX and UI can be shaped as beautifully as any modern website. The JavaScript is embedded directly in the page, and the controls are rendered in the normal page DOM.

SharePoint Framework development is JavaScript framework-agnostic. The toolchain is based on common open-source client development tools such as npm, Node.js, TypeScript, Yeoman, webpack, and gulp. It supports open-source JavaScript libraries and frameworks such as React, Angular, Handlebars, Knockout, and more. These provide a lightweight and rapid user experience.
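To give a feel for that toolchain, a typical SPFx project is scaffolded and run locally with a handful of standard commands (assuming Node.js and npm are already installed):

# Install the Yeoman generator and task runner, scaffold a project, then run it locally
npm install -g yo gulp @microsoft/generator-sharepoint
yo @microsoft/sharepoint
gulp serve

# Bundle and package the solution for deployment to the App Catalog
gulp bundle --ship
gulp package-solution --ship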

3. Mobile-First Design

“Mobile first”, as the name suggests, means that we start the product design from the mobile end, where tighter constraints force the content to remain usable in the small space of a phone. We can then expand those features into the more generous space of a tablet or desktop version.

Because SharePoint Framework customizations run in the context of the current page (and not in an IFRAME), they are responsive, lightweight, accessible, and mobile-friendly. Mobile support is built-in from the start. Content reflows across device sizes and pages are fast and fluid.

4. Simplified Deployment

There is some work to do at the beginning of a new project to set up the SPFx structure to support reading from a remote host. An App Catalog must be created, and a manifest file must be generated and uploaded. If the hosted content is served from a CDN (Content Delivery Network), that will also require setup. However, once those structural pieces are in place, deployment is simplified to updating files on the host location. It does not require traditional code deployments of server-side code, with their attendant restrictions and security review lead time.

5. Easier Integration of External Data Sources

With SPFx, calls to data from external sources may be easier since it’s web content hosted outside of SharePoint.

SPFx Constraints and Disadvantages

The SharePoint Framework is only available in SharePoint Online, on-premises SharePoint 2016, and SharePoint 2019 at the time of this blog. SPFx cannot be added to earlier versions of SharePoint such as SharePoint 2013 and 2010.

SharePoint Framework Extensions cannot be used in on-premises SharePoint 2016; they are supported only in SharePoint 2019 and SharePoint Online.

SPFx, like any other client-side implementation, runs in the context of the logged-in user. Permissions cannot be elevated to impersonate an admin user as they can in farm solutions, a CSOM (client-side object model) context, or SharePoint Add-ins and Office 365 web applications. Application functionality and customization are therefore limited to the current user's permission level. To overcome this constraint, a hybrid solution can be implemented in which SPFx communicates with Application Programming Interfaces (APIs); the APIs are registered as a SharePoint add-in that uses the app-only context to communicate with SharePoint. For this communication between SPFx and the API to work, the API must support CORS (Cross-Origin Resource Sharing), since the calls are cross-domain client-side calls.
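As a sketch of that pattern, an SPFx web part can call an external, CORS-enabled API with the framework's HttpClient; the URL and method name below are illustrative placeholders:

import { HttpClient, HttpClientResponse } from '@microsoft/sp-http';

// Inside a web part class: call a hypothetical external API (for example, one registered
// as a SharePoint add-in using the app-only context). CORS must allow the SharePoint origin.
private async getOrderSummary(): Promise<any> {
  const response: HttpClientResponse = await this.context.httpClient.get(
    'https://api.contoso.example/orders/summary',  // placeholder URL
    HttpClient.configurations.v1);
  return await response.json();
}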

SPFx is also not suited to long-running operations, since it is an entirely client-side implementation and a web request cannot wait indefinitely for a long-running operation to respond. For those processes, a hybrid approach works: implement the long-running operation in an Azure WebJob or Function, and let SPFx receive updates from it via a webhook.

Developers coming from a server-side background will face a learning curve with entirely client-side development, but TypeScript helps ease the transition.

SPFx Comparison to Other Technologies and Models

SharePoint lists come in handy for many organizations when entering data, but customers always ask for the ability to display the data in some reporting format, such as a dashboard. Below we compare the different ways we can accomplish this and why SPFx is a good fit:

  • Classic Web Part Pages: If we do not want to use the SharePoint Framework, SharePoint 2019 still supports classic web part pages. You can add content editor web parts and deploy any custom JavaScript/jQuery code. However, with this approach, uploading the JS files to an SP library and manually adding pages to a library becomes cumbersome. We may end up writing custom JSOM (JavaScript object model) code to make the deployment easier. Microsoft does not recommend this approach, and it may no longer be supported in the future. Also, with this approach, if you want to render any custom tables, you need to write custom code or use a third-party table control. Using the SharePoint Framework, we can easily use Office UI Fabric React components like DetailsList.
  • Custom App: We can design custom applications to deploy in the cloud, which can read the data from SharePoint. The challenge is that each customer environment is different. It’s not always easy to connect to SharePoint from the cloud in a production environment, especially with CAC (Common Access Card) authenticated sites.
  • PowerApps/Logic Apps: With newer technologies such as PowerApps, Logic Apps, and Flow, we can design custom SharePoint forms and business logic and connect to SharePoint using the SharePoint connector. In a production environment, however, it is not easy to get connections approved and to connect to on-premises data. PowerApps and Flow also require the purchase of licenses.

Using SPFx, we can quickly design dashboards using Office UI Fabric components. For deployment, we do not need to write any custom utility code; the SharePoint Framework package can create the lists and libraries as well.
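For illustration, a minimal web part that renders a hard-coded dashboard table with the Office UI Fabric React DetailsList might look like the sketch below; in a real solution the rows would come from a SharePoint list via its REST API:

import * as React from 'react';
import * as ReactDom from 'react-dom';
import { BaseClientSideWebPart } from '@microsoft/sp-webpart-base';
import { DetailsList } from 'office-ui-fabric-react/lib/DetailsList';

export default class DashboardWebPart extends BaseClientSideWebPart<{}> {
  public render(): void {
    // Hard-coded sample rows for illustration only
    const items = [
      { Project: 'Intranet Refresh', Status: 'In Progress' },
      { Project: 'Records Migration', Status: 'Complete' }
    ];
    ReactDom.render(React.createElement(DetailsList, { items: items }), this.domElement);
  }
}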

Wrapping Up

We hope this blog provided a useful overview of SPFx and its capabilities. Look for our next blog post (Part II) on developing and deploying custom SPFx Web Parts and Extensions, and connecting to APIs/Azure in SharePoint Online and SharePoint 2019!

Additional Links to get started in SPFx

Today, let’s talk about network isolation and traffic policy within the context of Kubernetes.

Network Policy Specification

Kubernetes’ first-class notion of network policy allows a customer to determine which pods are allowed to talk to other pods. While these policies are part of the Kubernetes specification, it is tools like Calico and Cilium that implement them.

Here is a simple example of a network policy:

...
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted
  ...

In the above example, only pods with the label zone: trusted are allowed to send incoming traffic to the pod.

egress:
  - action: deny
    destination:
      notSelector: ns == 'gateway'

The above example deals with outgoing traffic. This (Calico-style) policy ensures that outgoing traffic is blocked unless the destination is labeled ns == 'gateway'.
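For context, here is a sketch of how the earlier ingress rule could sit inside a complete Kubernetes NetworkPolicy manifest; the app: backend pod selector is a placeholder for the pods being protected:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-trusted
spec:
  # Placeholder selector for the pods this policy protects
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: trusted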

As you can see, network policies are important for isolating pods from each other in order to avoid leaking information between applications. However, if you are dealing with data that requires higher trust levels, you may want to consider isolating the applications at the cluster level. The following diagrams depict both logical (network policy based) and physical (isolated) clusters.

Diagram of a Prod Cluster
Diagrams of Prod Team Clusters

Network Policy is NOT Traffic Routing…Enter Istio!

Network policies, however, do not allow us to control the flow of traffic on a granular level. For example, let’s assume that we have three versions of a “reviews” service (a service that returns user reviews for a given product). If we want the ability to route the traffic to any of these three versions dynamically, we will need to rely on something else. In this case, let’s use the traffic routing provided by Istio.

Istio is a tool that manages the traffic flow across services using two primary components:

  1. An Envoy proxy (more on Envoy later in the post) distributes traffic based on a set of rules.
  2. The Pilot manages and configures the traffic rules that let you specify how traffic should be routed.

Diagram of Istio Traffic Management

Here is an example of Istio policy that directs all traffic to the V1 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

Here is a Kiali Console view of all “live” traffic being sent to the V1 version of the “reviews” service:

Kiali console screenshot

Now here’s an example of Istio policy that directs all traffic to the V3 version of the “reviews” service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3

And here is a Kiali Console view of all “live” traffic being sent to the V3 version of the “reviews” service:

Kiali console screenshot v3

Envoy Proxy

Envoy is a lightweight proxy with powerful routing constructs. In the example above, the Envoy proxy is deployed as a “sidecar” alongside our services (the product page and reviews) and handles their outbound traffic. Envoy can dynamically route all outbound calls from the product page to the appropriate version of the “reviews” service.

We already know that Istio makes it simple to configure traffic routing policies in one place (via the Pilot). But Istio also makes it simple to inject the Envoy proxy as a sidecar. The following kubectl command labels the namespace for automatic sidecar injection:

#--> Enable Side Car Injection
kubectl label namespace bookinfo istio-injection=enabled

As you can see, each pod has two containers (the service and the Envoy proxy):

# Get all pods 
kubectl get pods --namespace=bookinfo

I hope this blog post helps you think about traffic routing between Kubernetes pods using Istio and Envoy. In future blog posts, we’ll explore the other facets of a “service mesh” – a common substrate for managing a large number of services – with traffic routing being just one of those facets.

The business intelligence, automation, and enterprise application landscape is changing dramatically.

In the previous incarnation of enterprise technology, line-of-business owners were forced to choose between pre-baked commercial off-the-shelf (COTS) software, which was difficult to customize and often did not truly meet the business’s unique needs, and custom solutions that (though flexible and often tailor-made to the business needs of the moment) cost more and were far riskier to develop and deploy.

Furthermore, certain classes of applications do not have a COTS answer, nor do they justify the cost of custom software development. In the chasm between the two arose a generation of quasi-apps: the homegrown Excel spreadsheets, Access databases, Google docs, and all manner of other back-of-the-napkin “systems.” End users developed these quasi-apps to fill the gaps between the big software IT provided and what users actually needed to do their jobs.

We’ve all been there: The massive spreadsheet that tracked a decade’s worth of employee travel but was always one accidental click away from oblivion. Or the quirky asset management database living on your officemate’s desktop (and still named after an employee who left the company five years ago); the SharePoint site full of sensitive HR data, or the shared network drive that had long been “shared” a bit too liberally. A generation of do-it-yourself workers grew up living on the edge of catastrophe with their quasi-apps.

Thankfully, three trends have converged to shatter this paradigm in 2019, fundamentally changing the relationship between business users, technologists, and their technology.

Connectivity of Everything

The new generation of business applications is hyper-connected to one another. They allow for connections between business functions previously considered siloed, unrelated, or simply not feasible or practical: travel plans set in motion by human resources decisions, medical procedures scheduled based on a combination of lab results and provider availability, and employee recruiting driven by sales and contracts.

Citizens’ Uprising

Business users long settled for spreadsheets and SharePoint, but new “low-code/no-code” tools empower these “citizen creators” to build professional-grade apps on their own. Airport baggage screeners can develop mobile apps that cut down on paperwork, trainers and facilitators can put interactive tools in the hands of their students, and analysts and researchers are no longer dependent on developers to “pull data” and create stunning visualizations.

New Ways of Looking at the World (& Your Data)

This isn’t just about business intelligence (BI) and data visualization tools far outpacing anything else that was recently available. It’s not even just about business users’ ability to harness and extend those tools. This is about the ability of tools like Microsoft Power BI to splice together, beautifully visualize, and help users interpret data that their organizations already own — data to which you’ve connected using one of the hundreds of native connectors to third-party services, and data generated every second of every minute of every day from the connected devices that enable the organization’s work.

It’s an exciting time. I’ve explored these trends further, plus how Microsoft’s Power Platform has become the go-to platform for organizations mastering the new landscape, in my whitepaper, Microsoft’s Power Platform and the Future of Business Applications. We’re way past CRM. I hope you’ll read it and share your thoughts with me!

Accurately identifying and authenticating users is an essential requirement for any modern application. As modern applications continue to migrate beyond the physical boundaries of the data center and into the cloud, balancing the ability to leverage trusted identity stores with the need for enhanced flexibility to support this migration can be tricky. Additionally, evolving requirements like allowing multiple partners, authenticating across devices, or supporting new identity sources push application teams to embrace modern authentication protocols.

Microsoft states that federated identity is the ability to “Delegate authentication to an external identity provider. This can simplify development, minimize the requirement for user administration, and improve the user experience of the application.”

As organizations expand their user base to allow authentication of multiple users/partners/collaborators in their systems, the need for federated identity is imperative.

The Benefits of Federated Authentication

Federated authentication allows organizations to reliably outsource their authentication mechanism. It helps them focus on actually providing their service instead of spending time and effort on authentication infrastructure. An organization or service that provides authentication for other systems is called an Identity Provider; it provides federated identity authentication to the service provider (relying party). By using a common identity provider, relying applications can easily offer users single sign-on (SSO) access to other applications and web sites.

SSO gives users quick access to multiple web sites without needing to manage individual passwords. Relying party applications communicate with a service provider, which then communicates with the identity provider to get user claims (claims authentication).

For example, an application registered in Azure Active Directory (AAD) relies on it as the identity provider. Users accessing an application registered in AAD are prompted for their credentials, and upon authentication by AAD, access tokens are sent to the application. The valid claims token authenticates the user, and the application performs any further authorization. The application therefore doesn’t need an additional authentication mechanism of its own, thanks to the federated authentication from AAD. The authentication process can be combined with multi-factor authentication as well.

Glossary

  • STS: Security Token Service
  • IdP: Identity Provider
  • SP: Service Provider
  • POC: Proof of Concept
  • SAML: Security Assertion Markup Language
  • RP: Relying Party (same as service provider) that calls the Identity Provider to get tokens
  • AAD: Azure Active Directory
  • ADDS: Active Directory Domain Services
  • ADFS: Active Directory Federation Services
  • OWIN: Open Web Interface for .NET
  • SSO: Single sign-on
  • MFA: Multi-factor authentication

OpenId Connect/OAuth 2.0 & SAML

SAML and OpenID Connect/OAuth are the two main types of identity protocols that modern applications implement and consume as a service to authenticate their users. Both provide a framework for implementing SSO/federated authentication. OpenID Connect is an open standard for authentication and combines with OAuth 2.0 for authorization; SAML is also an open standard and provides both authentication and authorization. OpenID Connect is JSON-based and OAuth 2.0 tokens can be either JSON or SAML 2.0, whereas SAML is XML-based. OpenID Connect/OAuth are best suited for consumer applications like mobile apps, while SAML is preferred for enterprise-wide SSO implementations.

Microsoft Azure Cloud Identity Providers

The Microsoft Azure cloud provides numerous authentication methods for cloud-hosted and “hybrid” on-premises applications, including options for either OpenID Connect/OAuth or SAML authentication. Some of the identity solutions are Azure Active Directory (AAD), Azure AD B2C, Azure AD B2B, Azure AD pass-through authentication, Active Directory Federation Services (ADFS), migrating on-premises ADFS applications to Azure, and Azure AD Connect with federation and SAML as the IdP.

Identity providers that implement the SAML 2.0 standard include Azure Active Directory (AAD), Okta, OneLogin, PingOne, and Shibboleth.

A Deep Dive Implementation

This blog post will walk through an example I recently worked on using federated authentication with the SAML protocol. I was able to dive deep into identity and authentication with an assigned proof of concept (POC): create a claims-aware application within an ASP.NET Azure Web Application using federated authentication and the SAML protocol. I used OWIN middleware to connect to the Identity Provider.

The scope of the POC was not to develop an Identity Provider/STS (Security Token Service) but to develop a Service Provider/Relying Party (RP) that sends a SAML request and receives SAML tokens/assertions. The SAML tokens are then used by the calling application to authorize the user into the application.

Given that scope, I used a stub Identity Provider so that the authentication implementation could later be plugged into a production application and communicate with other enterprise SAML Identity Providers.

The Approach

For an application to be claims-aware, it needs to obtain a claims token from an Identity Provider. The claims contained in the token are then used for additional authorization in the application. Claims tokens are issued by an Identity Provider after authenticating the user. The login page for the application (where the user signs in) can itself be a Service Provider (Relying Party), or just an ASP.NET UI application that communicates with the Service Provider via a separate implementation.

Figure 1: Overall architecture – Identity Provider Implementation

The Implementation

An ASP.NET MVC application was implemented as the SAML Service Provider, with OWIN middleware to initiate the connection with the SAML Identity Provider.

First, communication is initiated with a SAML request from the service provider. The identity provider validates the SAML request, verifies and authenticates the user, and sends back the SAML tokens/assertions. The claims returned to the service provider are then sent back to the client application. Finally, the client application can authorize the user based on the claims returned from the SAML identity provider, using roles or other more refined permissions.

SustainSys Saml2 is an open-source library whose SAML2 components add SAML2P support to ASP.NET web sites and serve as the SAML2 Service Provider (SP). For the proof-of-concept effort, I used the SustainSys stub SAML identity provider to test the SAML service provider; SustainSys also provides sample service provider implementations that work against the stub.

Implementation steps:

  • Start with an ASP.NET MVC application.
  • Add NuGet packages for OWIN middleware and SustainSys SAML2 libraries to the project (Figure 2).
  • Modify Startup.cs (partial classes) to build the SAML request; set all authentication types such as cookies, default sign-in, and Saml2 (Listing 2).
  • In the CreateSaml2Options and CreateSPOptions methods, the SAML request is built with the private and public certificates, the federation SAML Identity Provider URL, and so on.
  • The service provider establishes the connection to the identity provider on startup and is ready to listen for client requests.
  • Cookie authentication is configured, the default authentication type is set to “Application,” and the SAML authentication request is set up by forming the SAML request.
  • Once the SAML request options are set, the Identity Provider is instantiated with its URL and those options, and Federation is set to true. The Service Provider is instantiated with the SAML request options for that identity provider. When the user signs in, the OWIN middleware issues a challenge to the Identity Provider and receives the SAML response and claims/assertions back at the service provider.
  • The OWIN middleware issues the challenge to the SAML Identity Provider with a callback method (ExternalLoginCallback(…)). The identity provider returns to that callback method after authenticating the user (Listing 3).
  • AuthenticateAsync returns the claims from the Identity Provider, and the user is authenticated at this point. The application can use the claims to authorize the user within the application.
  • No additional web configuration is needed for SAML Identity Provider communication, but the application config values can be persisted in web.config.

Figure 2: OWIN Middleware NuGet Packages

Listing 1:  Startup.cs (Partial)

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Claims_MVC_SAML_OWIN_SustainSys.Startup))]

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Listing 2: Startup.cs (Partial)

using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Owin;
using Sustainsys.Saml2;
using Sustainsys.Saml2.Configuration;
using Sustainsys.Saml2.Metadata;
using Sustainsys.Saml2.Owin;
using Sustainsys.Saml2.WebSso;
using System;
using System.Configuration;
using System.Globalization;
using System.IdentityModel.Metadata;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;

namespace Claims_MVC_SAML_OWIN_SustainSys
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {            
            // Enable Application Sign In Cookie
            var cookieOptions = new CookieAuthenticationOptions
            {
                LoginPath = new PathString("/Account/Login"),
                AuthenticationType = "Application",
                AuthenticationMode = AuthenticationMode.Passive
            };

            app.UseCookieAuthentication(cookieOptions);

            app.SetDefaultSignInAsAuthenticationType(cookieOptions.AuthenticationType);

            app.UseSaml2Authentication(CreateSaml2Options());
        }

        private static Saml2AuthenticationOptions CreateSaml2Options()
        {
            string samlIdpUrl = ConfigurationManager.AppSettings["SAML_IDP_URL"];
            string x509FileNamePath = ConfigurationManager.AppSettings["x509_File_Path"];

            var spOptions = CreateSPOptions();
            var Saml2Options = new Saml2AuthenticationOptions(false)
            {
                SPOptions = spOptions
            };

            var idp = new IdentityProvider(new EntityId(samlIdpUrl + "Metadata"), spOptions)
            {
                AllowUnsolicitedAuthnResponse = true,
                Binding = Saml2BindingType.HttpRedirect,
                SingleSignOnServiceUrl = new Uri(samlIdpUrl)
            };

            idp.SigningKeys.AddConfiguredKey(
                new X509Certificate2(HostingEnvironment.MapPath(x509FileNamePath)));

            Saml2Options.IdentityProviders.Add(idp);
            new Federation(samlIdpUrl + "Federation", true, Saml2Options);

            return Saml2Options;
        }

        private static SPOptions CreateSPOptions()
        {
            string entityID = ConfigurationManager.AppSettings["Entity_ID"];
            string serviceProviderReturnUrl = ConfigurationManager.AppSettings["ServiceProvider_Return_URL"];
            string pfxFilePath = ConfigurationManager.AppSettings["Private_Key_File_Path"];
            string samlIdpOrgName = ConfigurationManager.AppSettings["SAML_IDP_Org_Name"];
            string samlIdpOrgDisplayName = ConfigurationManager.AppSettings["SAML_IDP_Org_Display_Name"];

            var swedish = CultureInfo.GetCultureInfo("sv-se");
            var organization = new Organization();
            organization.Names.Add(new LocalizedName(samlIdpOrgName, swedish));
            organization.DisplayNames.Add(new LocalizedName(samlIdpOrgDisplayName, swedish));
            organization.Urls.Add(new LocalizedUri(new Uri("http://www.Sustainsys.se"), swedish));

            var spOptions = new SPOptions
            {
                EntityId = new EntityId(entityID),
                ReturnUrl = new Uri(serviceProviderReturnUrl),
                Organization = organization
            };
        
            var attributeConsumingService = new AttributeConsumingService("Saml2")
            {
                IsDefault = true,
            };

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("urn:someName")
                {
                    FriendlyName = "Some Name",
                    IsRequired = true,
                    NameFormat = RequestedAttribute.AttributeNameFormatUri
                });

            attributeConsumingService.RequestedAttributes.Add(
                new RequestedAttribute("Minimal"));

            spOptions.AttributeConsumingServices.Add(attributeConsumingService);

            spOptions.ServiceCertificates.Add(new X509Certificate2(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase + pfxFilePath));

            return spOptions;
        }
    }
}

Listing 3: AccountController.cs

using Claims_MVC_SAML_OWIN_SustainSys.Models;
using Microsoft.Owin.Security;
using System.Security.Claims;
using System.Text;
using System.Web;
using System.Web.Mvc;

namespace Claims_MVC_SAML_OWIN_SustainSys.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        public AccountController()
        {
        }

        [AllowAnonymous]
        public ActionResult Login(string returnUrl)
        {
            ViewBag.ReturnUrl = returnUrl;
            return View();
        }

        //
        // POST: /Account/ExternalLogin
        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public ActionResult ExternalLogin(string provider, string returnUrl)
        {
            // Request a redirect to the external login provider
            return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl }));
        }

        // GET: /Account/ExternalLoginCallback
        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var loginInfo = AuthenticationManager.AuthenticateAsync("Application").Result;
            if (loginInfo == null)
            {
                return RedirectToAction("/Login");
            }

            //Loop through to get claims for logged in user
            StringBuilder sb = new StringBuilder();
            foreach (Claim cl in loginInfo.Identity.Claims)
            {
                sb.AppendLine("Issuer: " + cl.Issuer);
                sb.AppendLine("Subject: " + cl.Subject.Name);
                sb.AppendLine("Type: " + cl.Type);
                sb.AppendLine("Value: " + cl.Value);
                sb.AppendLine();
            }
            ViewBag.CurrentUserClaims = sb.ToString();
            
            //ASP.NET ClaimsPrincipal is empty as Identity returned from AuthenticateAsync should be cast to IPrincipal
            //var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
            //var claims = identity.Claims;
            //string nameClaimValue = User.Identity.Name;
            //IEnumerable<Claim> claimss = ClaimsPrincipal.Current.Claims;
          
            return View("Login", new ExternalLoginConfirmationViewModel { Email = loginInfo.Identity.Name });
        }

        // Used for XSRF protection when adding external logins
        private const string XsrfKey = "XsrfId";

        private IAuthenticationManager AuthenticationManager
        {
            get
            {
                return HttpContext.GetOwinContext().Authentication;
            }
        }
        internal class ChallengeResult : HttpUnauthorizedResult
        {
            public ChallengeResult(string provider, string redirectUri)
                : this(provider, redirectUri, null)
            {
            }

            public ChallengeResult(string provider, string redirectUri, string userId)
            {
                LoginProvider = provider;
                RedirectUri = redirectUri;
                UserId = userId;
            }

            public string LoginProvider { get; set; }
            public string RedirectUri { get; set; }
            public string UserId { get; set; }

            public override void ExecuteResult(ControllerContext context)
            {
                var properties = new AuthenticationProperties { RedirectUri = RedirectUri };
                if (UserId != null)
                {
                    properties.Dictionary[XsrfKey] = UserId;
                }
                context.HttpContext.GetOwinContext().Authentication.Challenge(properties, LoginProvider);
            }
        }
    }
}

Listing 4: Web.Config

<?xml version="1.0" encoding="utf-8"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  https://go.microsoft.com/fwlink/?LinkId=301880
  -->
<configuration>
  <appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="SAML_IDP_URL" value="http://localhost:52071/" />
    <add key="x509_File_Path" value="~/App_Data/stubidp.sustainsys.com.cer"/>
    <add key="Private_Key_File_Path" value="/App_Data/Sustainsys.Saml2.Tests.pfx"/>
    <add key="Entity_ID" value="http://localhost:57234/Saml2"/>
    <add key="ServiceProvider_Return_URL" value="http://localhost:57234/Account/ExternalLoginCallback"/>
    <add key="SAML_IDP_Org_Name" value="Sustainsys"/>
    <add key="SAML_IDP_Org_Display_Name" value="Sustainsys AB"/>
  </appSettings>
</configuration>

Claims returned from the identity provider to the service provider:

Claims returned from the identity provider to service provider

Additional References

Azure Web Apps Background

I’ve been working with Azure Web Apps for a long time. Before the launch of Azure Web Apps for Containers (or even Azure Web App on Linux), these web apps ran on Windows virtual machines managed by Microsoft. This meant that any workload running behind IIS (e.g., ASP.NET) would run without hiccups, but that was not the case for workloads that preferred Linux over Windows (e.g., Drupal).

Furthermore, the Azure Web Apps that ran on Windows were not customizable. This meant that if your website required a custom tool to work properly, chances are it was not going to work on an Azure Web App, and you’d need to deploy a full-blown IaaS virtual machine. There was also a strict lockdown on tools and language runtime versions, which you couldn’t change. So, if you wanted the latest bleeding-edge language runtime, you weren’t going to get it.

Azure Web Apps for Containers: Drum Roll

Last year, Microsoft released the Azure Web Apps for Containers (Linux App Service plan) offering to the public. This meant we could build a custom Docker image containing all the binaries and files, and then deploy it on the PaaS offering. After working with the product for some time, I was impressed.

The product was excellent, and it was clear that it had potential. Some of the benefits:

  • Ability to use a custom Docker image to run the Web App
  • Zero headaches from managing Docker containers
  • The benefits of Azure Web App on Windows like Backups, Kudu, Deployment Slots, Autoscaling (Scale up & Scale out), etc.

Suddenly, running workloads that preferred Linux or required custom binaries became extremely easy.
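As a rough sketch of how that looks in practice, a Linux App Service plan and a container-based web app can be created with the Azure CLI; the resource names below are placeholders, and the public nginx image stands in for your custom image:

# Create a resource group, a Linux App Service plan, and a container-based web app
az group create --name my-rg --location eastus
az appservice plan create --name my-linux-plan --resource-group my-rg --is-linux --sku B1
az webapp create --name my-container-app --resource-group my-rg --plan my-linux-plan \
    --deployment-container-image-name nginx:latest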

The Architecture

Compared to Azure Web App on Windows, the architecture implemented in Azure Web App for Containers is different.

diagram of Azure web apps architecture

Each of the above Web Apps is strictly locked down, with minimal possibility of modification. Furthermore, the backend storage is based on network file shares, which means that even if you don’t need any storage (for example, when your app simply reads data from a database and displays it back), the app still performs slowly.

diagram of Azure web apps architecture

The major difference is that the Kudu/SCM site runs in a separate container from the actual web app. Both containers are connected to each other with a private network. In this case, each App Service Plan is deployed on a separate Virtual Machine and all the plumbing is managed by Microsoft. The benefits of this approach are:

  • Better isolation. If your Kudu is experiencing issues, it reduces the chance of taking down your actual website.
  • Ability to customize the actual web app container running the website.
  • Better resource utilization

Stay tuned for the next part, in which I’ll discuss the various storage options available in Azure Web App for Containers and their trade-offs.

Happy holidays!

Did you know you can build an intelligent Twitter bot and run it for just pennies a month using Azure’s Logic and Function apps, coupled with Microsoft’s Language Understanding Intelligence Service (LUIS)? LUIS can “read” a tweet and determine the tweet’s sentiment with a little help from you. Run selected tweets through your LUIS app, determine their meaning, and then use that meaning to create a personalized tweet back at the original.

Here’s how…

Step One: Select a Twitter Query

Use Twitter’s advanced search tools to craft a query to narrow down your selection of tweets to the specific messages you want your bot to respond to. Your Azure charges will be usage-based, so you want this query to be specific enough to only pick up the kinds of messages your LUIS app will know how to respond to.

Step Two: Create an App with LUIS

If you don’t already have a LUIS app to use, follow the steps here to create your new LUIS app. For your utterances, I recommend using a sampling of tweets that were returned using the twitter query you created. Copy as many tweets from your query as possible into the LUIS test tool and assign them to the correct intent as needed. Train and publish your app before continuing.

Step Three: Create a Function App

Use the steps here to create a new Function App with an HTTP trigger.

Once you have the app and trigger created, download the function by clicking “Download app content.”

Screenshot with Download App content highlighted

Unzip your app and open it in Visual Studio. Add classes for the LUIS Prediction:

using System.Collections.Generic;
using Newtonsoft.Json;

public class Prediction
    {
        [JsonProperty(PropertyName = "query")]
        public string Query { get; set; }

        [JsonProperty(PropertyName = "topScoringIntent")]
        public Intent TopScoringIntent { get; set; }

        [JsonProperty(PropertyName = "intents")]
        public List<Intent> Intents { get; set; }

        [JsonProperty(PropertyName = "entities")]
        public List<Entity> Entities { get; set; }

        [JsonProperty(PropertyName = "luisPrediction")]
        public string LuisPrediction { get; set; }

        [JsonProperty(PropertyName = "desiredIntent")]
        public string DesiredIntent { get; set; }

        [JsonProperty(PropertyName = "isDesiredIntent")]
        public bool IsDesiredIntent { get; set; }
    }

public class Intent
    {
        [JsonProperty(PropertyName = "intent")]
        public string IntentValue { get; set; }

        [JsonProperty(PropertyName = "score")]
        public decimal Score { get; set; }
    }

   public class Entity
    {
        [JsonProperty(PropertyName = "entity")]
        public string EntityValue { get; set; }

        [JsonProperty(PropertyName = "type")]
        public string Type { get; set; }

        [JsonProperty(PropertyName = "startIndex")]
        public int StartIndex { get; set; }

        [JsonProperty(PropertyName = "endIndex")]
        public int EndIndex { get; set; }

        [JsonProperty(PropertyName = "score")]
        public decimal Score { get; set; }
    }

Then Modify your HTTPTrigger to parse the prediction:

[FunctionName("HttpTrigger")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            dynamic data = await req.Content.ReadAsAsync<object>();
            var prediction = ((JObject)data).ToObject<Prediction>();

            var message = GetTweetMessage(prediction);

            if (!string.IsNullOrEmpty(message))
            {
                return req.CreateResponse(HttpStatusCode.OK, message);
            }

            return req.CreateResponse(HttpStatusCode.NotFound);
        }

Replace “GetTweetMessage” with your own code to interpret the intent and entities (if defined/provided) and generate your tweet message, then send the message string back in the response. Deploy your changes back to Azure (right-click the project in Visual Studio, select “Publish,” and follow the instructions).
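As an illustration only, a minimal GetTweetMessage could map the top-scoring intent to a canned reply; the intent names below are hypothetical and should match the intents defined in your own LUIS app:

private static string GetTweetMessage(Prediction prediction)
{
    // Map the LUIS top-scoring intent to a reply (intent names are examples only)
    switch (prediction?.TopScoringIntent?.IntentValue)
    {
        case "Praise":
            return "Thanks for the kind words!";
        case "Complaint":
            return "Sorry to hear that. We'd love to make it right.";
        default:
            return null; // unrecognized intent: the caller returns 404 and the Logic App stops
    }
}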

Note: In order to use a free dev service plan for your function, you must turn its AlwaysOn setting to Off. You can only do this if you are using an HTTP trigger; a timer trigger won’t fire if you turn off AlwaysOn.

Do this by going to Application settings:

Screenshot with Application Settings highlighted
Toggle AlwaysOn and save the changes. You may now go to Platform features:

Screenshot with Platform features highlighted.

Then All Settings:

All Settings is highlighted.

Then scroll down to App Service Plan and choose Change App Service Plan:

All settings is highlighted.

Change the app service plan to your devtest (free) service plan.

Step Four: Create a Logic App

General information on creating new logic apps can be found here.

Once you’ve created your logic app, go to the Logic app designer:

Logic app designer is highlighted.
Create your first workflow item: a Twitter search tweets trigger. Use your search query from above and change the interval as needed:

Screenshot of query

Create your next workflow item by clicking the plus button at the bottom of your twitter search tweets trigger. Add a new LUIS get prediction action. (You will be prompted for your LUIS connection and key; you can find these in your LUIS app.) The connection value is your LUIS endpoint. Select your LUIS-connected app for the APP Id and then click on Utterance Text field. A flyout list of dynamic options will appear; choose Tweet text under the Twitter options. Leave Desired Intent blank.

Screenshot of Get Prediction input

Add a new flow item under Control -> Condition:

Screenshot of new flow item input.

This workflow checks for the Top Scoring Intent Name from LUIS. We don’t want to continue passing this message to our Azure function if LUIS did not recognize its intent, so we only continue if Top Scoring Intent is not equal to None.

The control flow adds two boxes below it: one for If True, the other for If False. Leave If False blank; the workflow will stop there if LUIS has not returned a usable intent. In the If True box, add a new action for Azure Functions and select the function you created above.

In the Request Body field of your function trigger, put the LUIS Body parameter. Then add another Twitter action to Post a tweet. Use the Function’s body to post the resulting message. Include a link back to the original tweet to make the tweet appear as a quoted retweet:

Screenshot of If True input

Your overall logic should look something like this: (You can see this bot in action at @LeksBot.)

Twitter bot logic is pictured.

Step Five: Train & Improve Your Bot

Open your logic app and scroll down to Runs history. You can see each time your bot has triggered. If you see tweets that weren’t responded to properly, you can open up each run and inspect the flow. You can see the run’s parameters and make adjustments. Paste the tweet into your LUIS app and train it on the correct intent. Each time you do this your app will become “smarter” and make fewer mistakes.

After you have re-trained LUIS (make sure you click Publish!) or made any adjustments to your flow, you can resubmit the same run (tweet) and make sure it’s processed correctly. Re-train and adjust as needed to improve your bot’s experience.

Two years ago, the Texas Workforce Commission (TWC) came to AIS with an outdated online budgeting tool called the “Texas Reality Check” for middle and high school students. The application, designed to give students a clear sense of how much their desired future lifestyle will cost and what education and career choices will support it, was plagued by performance and accessibility issues…and its young target demographic was simply tuning it out.

AIS modernized the site for teen sensibilities, streamlined the underlying information architecture for easier use, overhauled the content strategy and user experience, and made it fully compliant with the latest accessibility guidelines.  You can read more about our work on this project here.

The new and improved Texas Reality Check has since gone on to become the most popular application of the Labor Market and Career Information Department (LMCI) of the Texas Workforce Commission. And now it’s been honored with a 2018 “Best of Texas” Award for Best Application Serving the Public. The awards highlight the Texas state government’s top creative tech implementations of the year, for both internal improvements and public-facing services like TRC.

“Governmental and educational leaders in Texas are leveraging technology to improve cybersecurity, enhance citizen service and advance emergency response, among many other things,” said Teri Takai, executive director of the Center for Digital Government. “Congratulations to this year’s Best of Texas winners for the vital role they are playing in advancing information technology in Texas.”

We’re really proud of our work on this project and thrilled that school students all across Texas have responded to the site in such a positive and engaged way. We hope the application continues to inspire them to dream big…while also equipping them with the knowledge and tools they need to achieve their goals.

I am pleased to announce my latest Pluralsight course on PowerApps. (Well…such is the nature of change in the cloud that there has already been a name change since I submitted this course for publication only a few weeks back. The aspect of PowerApps covered in my course is now referred to as Canvas Apps.)

This course is designed for developers (both citizen and professional developers) interested in a low-code approach for building mobile applications.

Here’s some background on PowerApps, if you haven’t had a chance to play with it yet:

PowerApps is a productive low-code development platform. It allows you to very quickly build business applications that can run inside a web browser, on a phone, or on a tablet. PowerApps includes a web-based IDE (PowerApps Studio), a set of built-in cross-platform controls, an Excel-like expression language that also includes imperative constructs like variables and loops, and over 130 connectors to talk to any number of data sources, including SQL Server, Office 365, Salesforce, Twitter, etc. You can also use custom connectors to talk to your domain-specific data source.

Beyond the controls, expression language, and connectors, PowerApps provides ALM support in the form of app versioning, app publication to various app stores, swim-lanes for development environments, authentication and authorization (via Azure AD), RBAC controls, and security policies like data loss prevention (DLP). All in all, the PowerApps service seeks to significantly lower the bar for building and distributing cross-platform mobile applications within your enterprise.

For a concrete example of our use of PowerApps, please read how we built a cross-platform event app in less than a week. Also please check out a recent episode of DotNetRocks where we talk about PowerApps.

Finally, as part of the latest spring update, PowerApps is combining with Dynamics 365 for Sales, Marketing, and Talent applications to offer an enterprise high-productivity application platform as a service (known as Microsoft Business Applications platform). What this means for PowerApps developers is that:

  1. They can now take advantage of server-side logic
  2. They have access to a data-centric way of building declarative apps, known as model-driven apps (in contrast to canvas apps, which are built by dragging and dropping controls to a canvas).

For more information on the spring update, please refer to this blog post by Frank Weigel.

I hope you will find this course useful. Please reach out to me via this blog or Twitter if you have any questions or comments.

If you need managed services to maintain peak IT network operations, consider us here at Applied Information Sciences. We’ll manage all your IT services for a predictable cost so you can focus on more strategic investments. AIS’ Managed Services Practice provides ongoing responsibility for monitoring, patching and problem resolution for specific IT systems on your company’s behalf.

Capabilities

  • Patching
  • Monitoring
  • Alerting
  • Backup and Restore
  • Incident Response

AIS’ Managed Services Practice offers up to 24×7 coverage for initial responses to incidents through a combination of dedicated, part-time, and full-time staff, both onshore and offshore. AIS prides itself on being on the leading edge of managed services support. Our collaborative, disciplined approach is committed to quality, value, time, and budget. Read More…

 

AIS recently completed work on a complete revamp of the Texas Workforce Commission’s “Texas Reality Check” website. Texas Reality Check is an Internet-available, fully accessible, responsive, mobile-first, and browser-agnostic design. The website was tested for accessibility, performance, vulnerabilities, and usability.

Background

Texas Reality Check (TRC) is targeted at students on a statewide basis, ranging from middle school to high school (with some colleges and universities making use of the tool for “life skills” classes). The goal is to inspire students to think about occupations, and prepare for educational requirements so they can achieve the income level that meets their lifestyle expectations.

This tool walks students through different areas of life on a step-by-step basis, identifying budgets associated with living essentials such as housing, transportation, food, clothing, etc. Students make selections and then calculate the corresponding monthly income that would afford the choices they have made. From there, students are directed to another page and connected to a database of careers and associated salaries.

However, the existing site was dated and in need of improvements in three core areas: UX, Accessibility, and overall performance. Here’s how AIS delivered:

Read More…