ACA Compliance Group needed help streamlining its communications landscape so its fast-growing workforce could collaborate more effectively. AIS recommended starting small with Microsoft Teams adoption, using Microsoft Planner to gain advocates, realize quick wins, and gather insights to guide the larger rollout.

Starting Their Cloud Transformation Journey

The cloud brings many advantages to companies and their employees, including anywhere access and seamless collaboration. However, to unleash the full power of cloud-based collaboration, a company must select collaboration technology that fits its business needs and ensure employees adopt both the technology and the accompanying changes in practices and processes. That adoption ultimately benefits the business through increased productivity and satisfaction.

In early 2019, an international compliance firm with around 800 employees contacted AIS to help migrate multiple email accounts into a single Office 365 (O365) Exchange account. They invited AIS to continue their cloud journey and help them:

  • Understand their existing business processes and pain points across multiple time zones, countries, departments, and teams.
  • Provide their employees with a secure, reliable, and integrated solution to effective communication and collaboration.
  • Increase employee productivity by improving file and knowledge sharing and problem-solving.
  • Reduce cost from licensing fees for products duplicating features already available through the company’s enterprise O365 license.

Kicking Off a Customer Immersion Experience

First, AIS provided a Microsoft Customer Immersion Experience (CIE) demonstration, which served as the foundational step to introduce all O365 tools. After receiving stakeholder feedback, needs, and concerns, we collaboratively determined the best order for rolling out the O365 applications. The client elected to move forward with Microsoft Teams adoption as the first step in implementing collaboration software across the organization.

Pilots for Microsoft Teams Adoption

Next, we conducted a pilot with two departments to quickly bring benefits to the organization without a large cost investment and to gather insights that would inform the overall Teams adoption plan and strategy for the entire organization. We confirmed with pilot study employees that they saw and welcomed the benefits that Microsoft Teams provides, including:

  • Reduced internal emails.
  • Seamless communication and collaboration among (remote) teams/departments.
  • Increased productivity, efficiency, and transparency.
  • Centralized and accessible location for files, documents, and resources in Teams.

The pilot study also found that adopting Microsoft Teams in the organization would require a paradigm shift. Many employees were used to email communication, including sending attachments back and forth that were hard to track. In addition, while some departments had sophisticated collaboration tools, a common collaboration tool across the company did not exist. For web conferencing, for example, different departments preferred different tools, such as GoToMeeting and WebEx, and most of them incurred subscription fees. Employees had to install multiple tools on their computers to collaborate across departmental boundaries.


Embracing Benefits of Microsoft Teams with Organizational Change Management (OCM)

To help employees understand the benefits of Teams, embrace the new tool, and willingly navigate the associated changes, we formed a project team for the organization-wide deployment with several roles: a Project Manager, a Change Manager, a UX Researcher, a Business Analyst, and a Cloud Engineer. Organizational Change Management (OCM), User Experience (UX), and business analysis were as critical to the cloud implementation as its technical aspects.

Building on each other’s expertise, the project team worked collaboratively and closely with technical and business leaders at the company to:

  • Guide communication efforts to drive awareness of the project and support it.
  • Identify levers that would drive or hinder adoption and plan ways to promote the former and mitigate the latter.
  • Equip department leaders and champions to facilitate Teams adoption best practices among end users.
  • Guide end users on how to thrive using Teams through best practices and relevant business processes.
  • Provide data analytics and insights to support target adoption rates and customize training.
  • Use an agile approach to resolve both technical issues and people’s pain points, including using Teams for private chats, channel messages, and meetings.
  • Develop a governance plan that addressed technical and business evolution, accounting for the employee experience.

Cutting Costs & Boosting Collaboration

At the end of the 16-week engagement, AIS helped the client achieve its goals of enhanced collaboration, cost savings, and 90% Teams use with positive employee feedback. The company was well-positioned to achieve 100% by the agreed-upon target date.

Our OCM approach, grounded in the Prosci ADKAR® framework, a leading change management framework based on 20 years of research, contributed significantly to the project's success. As Prosci describes on its website, “ADKAR is an acronym that represents the five tangible and concrete outcomes that people need to achieve for lasting change”:

  • Awareness of the need for change
  • Desire to support the change
  • Knowledge of how to change
  • Ability to demonstrate skills and behaviors
  • Reinforcement to make the change stick

The OCM plan was designed to provide busy executives, leaders, and end users with the key support and actionable insights needed to achieve each of these outcomes efficiently and effectively.

If you would like to participate in a CIE demonstration or learn more about adopting cloud-based collaboration tools and practices in your company, we are here to help!


The Internet of Things (IoT), also called the Internet of Everything or the Industrial Internet, is a technology paradigm envisioned as a global network of machines and devices capable of interacting with each other. It is a network of Internet-connected devices that communicate embedded sensor data to the cloud for centralized processing. Here we build an end-to-end solution from device to cloud (see the reference architecture diagram below), covering all aspects of an IoT implementation, such as alerting an operator, shutting down a system, and more.

About the Microsoft Professional Program (MPP)

This program teaches the device programming, data analytics, machine learning, and solution design skills needed for a successful career in IoT. It is a collection of courses covering core technology tracks that help you keep pace with the industry’s latest trends. The courses are created and taught by experts and feature hands-on labs and engaging communities.

Azure IoT reference architecture diagram

Benefits

A.T. Kearney: The potential of the Internet of Things (IoT) is enormous. The world is projected to reach 26 billion connected devices by 2020, with incremental revenue potential of $300 billion in services.

McKinsey & Co: The Internet of Things (IoT) offers a potential economic impact of $4 trillion to $11 trillion a year by 2025.

For a partner, having certified people means being able to better serve customers, while developers can work on these projects and explore this new area.

MPP vs Microsoft Certification

The professional programs help you gain technical, job-ready skills and real-world experience through online courses, hands-on labs, and expert instruction within a specific time period. They are a good starting point for getting your hands dirty with the technologies through practical work, rather than the classic, book-based style of certification learning. In MPP you are asked questions during the modules, and you must complete all labs before the module exam, where you set up a solution from scratch; only if your solution is correct will your answers be correct.

This program consists of eight different courses.

  • Getting Started with IoT
    This is a basic, generic IoT course that provides a broad perspective on the IoT ecosystem. It covers the concepts and patterns of an IoT solution, the components of an IoT architecture, and how IoT can support business needs in industries such as Manufacturing, Smart City/Building, Energy, Healthcare, Retail, and Transportation.
  • Program Embedded Device Hardware
    Here you will learn the basics of programming resource-constrained devices. You will also pick up programming best practices for working with embedded devices and get practice developing code that interacts with hardware, SDKs, and devices that connect to various kinds of sensors.
  • Implement IoT Device Communication
    Explains the cloud gateway, Azure IoT Hub, which connects and manages IoT devices and helps configure them for secure cloud communication. Azure IoT Hub enables secure two-way communication between devices and the cloud. You will provision simulated devices using client tools such as the Azure CLI and perform management tasks while examining aspects of device security, the Device Provisioning Service, and how to provision devices at scale.
  • Analyze and Store IoT Data
    Covers analyzing and storing IoT data, and configuring the latest tools to implement the data analytics and storage requirements of IoT solutions. Explains cold storage concepts and how to set up Azure Data Lake for cold storage, as well as analysis and concepts for warm storage, using Azure Cosmos DB as an endpoint to receive data from Azure Stream Analytics jobs. Also covers the analytic capabilities of the Azure IoT Edge runtime: setting up Stream Analytics to run on a simulated edge device and using its querying, routing, and analysis capabilities.
  • Visualize and Interpret IoT Data
    In this course you explore Time Series Insights, real-time streaming, predictive models, and data visualization tools: how to build visualizations with Time Series Insights and how to create secure Power BI service dashboards for businesses. It covers the characteristics of time series data and how it can be used for analysis and prediction, how IoT telemetry is typically generated as time series data, and techniques for managing and analyzing it with Azure Time Series Insights, which can store, analyze, and instantly query massive amounts of time series data. It closes with a general introduction to Power BI, with specific emphasis on how Power BI can load, transform, and visualize IoT data sets.
  • Implement Predictive Analytics using IoT Data
    Covers predictive analytics for IoT solutions through a series of machine learning implementations common to IoT scenarios, such as predictive maintenance.
    You will learn to describe machine learning scenarios and algorithms commonly pertinent to IoT, use the IoT Solution Accelerator for Predictive Maintenance, prepare data for machine learning operations and analysis, apply feature engineering within the analysis process, and choose the appropriate machine learning algorithms for given business scenarios. You will also identify target variables based on the type of machine learning algorithm and evaluate the effectiveness of regression models.
  • Evaluate and Design an IoT Solution
    Learn to develop business planning documents and the solution architecture for your IoT implementations. To build massively scalable Internet of Things solutions in an enterprise environment, it is essential to have tools and services that can securely manage thousands to millions of devices while at the same time providing the back-end resources required to produce useful data insights and support for the business. Azure IoT services provide the scalability, reliability, and security, as well as a host of functions, to support IoT solutions in any of the vertical marketplaces (industrial/manufacturing, smart city/building, energy, agriculture, retail, etc.). In IoT Architecture Design and Business Planning, you will be presented with instruction documenting approaches to design, propose, deploy, and operate an IoT architecture.
  • Final Project
    Completing this project should leave you confident to start an IoT architect career, with the ability to design and implement a full IoT solution. In this project, you are evaluated on the knowledge and skills you acquired by completing the other IoT courses. Instead of learning new skills, you are assessed on what you already know, with the emphasis placed on hands-on activities. The project leverages a real-world scenario to verify that you have the skills required to implement an Azure IoT solution. The challenge activities section includes a series of tasks in which you must design and implement an Azure IoT solution encompassing many of the technologies covered. To be successful, you will need to apply many different aspects of the training within your solution.

Real-World Scenario

Consider simulating a weather station located at a remote location. The station sends telemetry data to the cloud, where it is stored for long-term analysis and also monitored in real time to ensure the wind speed does not exceed safe limits. When unsafe wind speeds are detected, the solution initiates an action that, in the real world, would send alert notifications and ensure the wind farm turbines apply their rotor brakes so they do not over-rev.
The Proof of Value must satisfy the following functional requirements and constraints:

  • Every turbine in the simulated farm will leverage several sensors to provide the telemetry information relating to turbine performance and will connect directly (and securely) to a network that provides access to the internet.
  • Demonstrate the use of Time Series Insights to view the wind turbine telemetry data.
  • Route telemetry to storage appropriate for high-speed access for Business Intelligence.
  • Create a dashboard in Power BI desktop that displays telemetry as lines charts and gauges.
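The real-time monitoring requirement above can be sketched as a small simulation. This is an illustrative stand-in, not course material: the 25 m/s cut-out threshold and the telemetry field names are assumptions, and a real solution would route these messages through Azure IoT Hub rather than process them locally.

```python
SAFE_WIND_LIMIT_MS = 25.0  # assumed cut-out speed; the actual limit comes from the turbine spec


def read_telemetry(turbine_id, wind_speed):
    """Build the device-to-cloud telemetry message for one simulated turbine."""
    return {"turbineId": turbine_id, "windSpeed": wind_speed, "rotorBrakeApplied": False}


def monitor(message):
    """Mimic the real-time alerting path: flag unsafe wind and apply the rotor brake."""
    if message["windSpeed"] > SAFE_WIND_LIMIT_MS:
        message["rotorBrakeApplied"] = True
        return f"ALERT: {message['turbineId']} over-rev risk at {message['windSpeed']} m/s"
    return None


msg = read_telemetry("turbine-07", 31.4)
print(monitor(msg))              # alert fires because 31.4 > 25.0
print(msg["rotorBrakeApplied"])  # True
```

In the actual project, the alert path would be implemented with routing and stream processing in the cloud; the sketch only captures the threshold-and-brake logic.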

Time Series Insights graph (demo screenshots 1-3)

Azure Arc is one of the most significant announcements to come out of #msignite this week. As depicted in the picture below, Azure Arc is a single control plane across multiple clouds, on-premises environments, and the edge.

Azure Arc

Source: https://azure.microsoft.com/en-us/services/azure-arc/

But we’ve seen single control planes before, no?

That is correct. The following snapshot (from 2013) shows App Controller securely connected to both on-premises and Microsoft Azure resources.

Azure App Controller in 2013

Source: https://blogs.technet.microsoft.com/yungchou/2013/02/18/system-center-2012-r2-explained-app-controller-as-a-single-pane-of-glass-for-cloud-management-a-primer/

So, what is different with Azure Arc?

Azure Arc is not just a “single-pane” of control for cloud and on-premises. Azure Arc takes Azure’s all-important control plane – namely, the Azure Resource Manager (ARM) – and extends it *outside* of Azure. In order to understand the implication of the last statement, it will help to go over a few ARM terms.

Let us start with the diagram below. ARM (shown in green) is the service used to provision resources in Azure (via the portal, Azure CLI, Terraform, etc.). A resource can be anything you provision inside an Azure subscription. For example, SQL Database, Web App, Storage Account, Redis Cache, and Virtual Machine. Resources always belong to a Resource Group. Each type of resource (VM, Web App) is provisioned and managed by a Resource Provider (RP). There are close to two hundred RPs within the Azure platform today (and growing with the release of each new service).
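To make these ARM terms concrete, every provisioned resource gets a fully qualified resource ID that encodes the subscription, resource group, and resource provider described above. The sketch below splits such an ID into its parts; the subscription GUID and names are made up for illustration.

```python
def parse_resource_id(resource_id):
    """Split an ARM resource ID into subscription, resource group, provider, type, and name."""
    parts = resource_id.strip("/").split("/")
    # Canonical shape: subscriptions/{sub}/resourceGroups/{rg}/providers/{rp}/{type}/{name}
    return {
        "subscription": parts[1],
        "resourceGroup": parts[3],
        "provider": parts[5],
        "type": parts[6],
        "name": parts[7],
    }


vm_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
         "/resourceGroups/az_arc_rg"
         "/providers/Microsoft.HybridCompute/machines/gcp-vm-001")
print(parse_resource_id(vm_id)["provider"])  # Microsoft.HybridCompute
```

Note how the provider segment names the RP responsible for the resource; this is exactly the hook Azure Arc uses, as discussed next.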

ARM

Source: http://rickrainey.com/2016/01/19/an-introduction-to-the-azure-resource-manager-arm/

Now that we understand the key terms associated with ARM, let us return to Azure Arc. Azure Arc takes the notion of the RP and extends it to resources *outside* of Azure by introducing a new RP called “HybridCompute” (see the details of the HybridCompute RP in the screenshot below). As you can imagine, the HybridCompute RP is responsible for managing resources *outside* of Azure. It manages these external resources by connecting to the Azure Arc agent deployed on the external VM. The current preview is limited to Windows and Linux VMs; in the future, the Azure Arc team plans to support containers as well.

RP Hybrid Compute Screenshot

Note: You will first need to register the provider using the command az provider register --namespace Microsoft.HybridCompute

Once we deploy the Azure Arc agent [1] to a VM running in Google Cloud, it shows up inside the Azure Portal within the resource group “az_arc_rg” (see screenshot below). The Azure Arc agent requires connectivity to the Azure Arc service endpoints for this setup to work. All connections are outbound from the agent to Azure and are secured with SSL; all traffic can be routed via an HTTPS proxy.

deploy the Azure Arc agent [1] to a VM running in Google cloud

Since the Google Cloud-hosted VM (gcp-vm-001) is now an ARM resource, it is an object inside Azure AD. Furthermore, a managed identity can be associated with the Google VM.

Benefits of Extending ARM to Resources Outside Azure:

  • Ability to manage external VMs as ARM resources via the Azure Portal and CLI, including the ability to add tags, as shown below.
  • Ability to centrally manage access and security policies for external resources with Role-Based Access Control.
    Microsoft Hybrid Compute Permissions
  • Ability to enforce compliance and simplify audit reporting.

[1] Azure Arc Agent is installed by running the following script on the remote VM. This script is generated from the Azure portal:

# Download the package:

Invoke-WebRequest -Uri https://aka.ms/AzureConnectedMachineAgent -OutFile AzureConnectedMachineAgent.msi

# Install the package:

msiexec /i AzureConnectedMachineAgent.msi /l*v installationlog.txt /qn | Out-String

# Run connect command:

& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "az_arc_rg" --tenant-id "" --location "westus2" --subscription-id ""

This blog is a follow-up about Azure Cognitive Services, Microsoft’s offering for enabling artificial intelligence (AI) applications in daily life. The offering is a collection of AI services with capabilities around speech, vision, search, language, and decision-making.

In Azure Cognitive Services Personalizer: Part One, we discussed the core concepts and architecture of Azure Personalizer Service, Feature Engineering, its relevance, and its importance.

In this blog, Part Two, we will go over a couple of use cases in which Azure Personalizer Service is implemented. We will look at features used, reward calculation, and their test run result. Stay tuned for Part Three, where we will list out recommendations and capacities for implementing solutions using Azure Personalizer Service.

Use Cases and Results

The two use cases implemented using Personalizer involve ranking content for each user of a business application.

Use Case 1: Dropdown Options

Different users of an application with manager privileges would see a list of reports that they can run. Before Personalizer was implemented, the list of dozens of reports was displayed in alphabetical order, requiring most of the managers to scroll through the lengthy list to find the report they needed. This created a poor user experience for daily users of the reporting system, making for a good use case for Personalizer. The tooling learned from the user behavior and began to rank frequently run reports on the top of the dropdown list. Frequently run reports would be different for different users, and would change over time for each manager as they get assigned to different projects. This is exactly the situation where Personalizer’s reward score-based learning models come into play.

Context Features

In our dropdown options use case, the context features JSON (with sample data) is as follows:

{
    "contextFeatures": [
        { 
            "user": {
                "id":"user-2"
            }
        },
        {
            "scenario": {
                "type": "Report",
                "name": "SummaryReport",
                "day": "weekend",
                "timezone": "est"
            }
        },
        {
            "device": {
                "mobile":false,
                "Windows":true,
                "screensize": [1680,1050]
            }
        }
    ]
}

Actions (Items) Features

Actions were defined for this use case as the following JSON object (with sample data):

{
    "actions": [
    {
        "id": "Project-1",
        "features": [
          {
              "clientName": "Client-1",
              "projectManagerName": "Manager-2"
          },
          {

                "userLastLoggedDaysAgo": 5
          },
          {
              "billable": true,
              "common": false
          }
        ]
    },
    {
         "id": "Project-2",
         "features": [
          {
              "clientName": "Client-2",
              "projectManagerName": "Manager-1"
          },
          {

              "userLastLoggedDaysAgo": 3
           },
           {
              "billable": true,
              "common": true
           }
        ]
    }
  ]
}
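With the context and action features defined, a Rank call combines them into a single request body. The sketch below only assembles the payload; the field names (eventId, contextFeatures, actions) follow the Personalizer REST API, but the endpoint URL, subscription key handling, and the actual HTTP call are deliberately omitted.

```python
import uuid


def build_rank_request(context_features, actions):
    """Assemble the body of a Personalizer Rank call.

    The eventId ties the later Reward call back to this Rank call.
    """
    return {
        "eventId": str(uuid.uuid4()),
        "contextFeatures": context_features,
        "actions": actions,
    }


context = [{"user": {"id": "user-2"}}, {"device": {"mobile": False}}]
actions = [
    {"id": "Project-1", "features": [{"billable": True, "common": False}]},
    {"id": "Project-2", "features": [{"billable": True, "common": True}]},
]

body = build_rank_request(context, actions)
print(body["actions"][0]["id"])  # Project-1
```

In our implementation, this body was POSTed to the Personalizer Rank endpoint, and the returned ranking drove the order of items shown to the user.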

Reward Score Calculation

Reward score was calculated based on the actual report selected (from the dropdown list) by the user from the ranked list of reports displayed with the following calculation:

  • If the user selected the 1st report from the ranked list, then reward score of 1
  • If the user selected the 2nd report from the ranked list, then reward score of 0.5
  • If the user selected the 3rd report from the ranked list, then reward score of 0
  • If the user selected the 4th report from the ranked list, then reward score of -0.5
  • If the user selected the 5th report or later from the ranked list, then reward score of -1
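The mapping above can be expressed as a small helper. This is an illustrative sketch of our scoring rule, not Personalizer API code; the computed value is what gets sent to the Reward call.

```python
def report_reward(selected_rank):
    """Map the 1-based position of the report the user picked to a reward score."""
    scores = {1: 1.0, 2: 0.5, 3: 0.0, 4: -0.5}
    return scores.get(selected_rank, -1.0)  # 5th or later: -1


print(report_reward(2))  # 0.5
```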

Results

View of the alphabetically ordered report names in the dropdown before personalization:

alphabetically ordered report names in the dropdown before personalization

View of the Personalizer ranked report names in the dropdown for the given user:

Azure Personalizer ranked report names based on frequency

Use Case 2: Projects in Timesheet

Every employee in the company logs a daily timesheet listing all of the projects the user is assigned to, along with other projects such as overhead. Depending on the employee’s project allocations, his or her timesheet table could list anywhere from a few to a couple dozen active projects. Even though an employee may be assigned to several projects, particularly at the lead and manager levels, they typically don’t log time against more than 2 to 3 projects over a stretch of weeks to months.

Before personalization, the projects in the timesheet table were listed in alphabetical order, again resulting in a poor user experience. Even more troublesome, frequent user errors caused the accidental logging of time in the incorrect row. Personalizer was a good fit for this use case as well, allowing the system to rank projects in the timesheet table based on time logging patterns for each user.

Context Features

For the Timesheet use case, context features JSON object is defined as below (with sample data):

{
    "contextFeatures": [
        { 
            "user": {
                "loginid":"user-1",
                "managerid":"manager-1"
		  
            }
        },
        {
            "scenario": {
                "type": "Timesheet",
                "day": "weekday",
                "timezone": "ist"
            }
        },
        {
            "device": {
                "mobile":true,
                "Windows":true,
                "screensize": [1680,1050]
            }
        }
     ]
}

Actions (Items) Features

For the timesheet use case, the Actions JSON object structure (with sample data) is as follows:

{
    "actions": [
    {
        "id": "Project-1",
        "features": [
          {
              "clientName": "Client-1",
              "userAssignedForWeeks": "4-8"
          },
          {

              "TimeLoggedOnProjectDaysAgo": 3
          },
          {
              "billable": true,
              "common": false
          }
        ]
    },
    {
         "id": "Project-2",
         "features": [
          {
              "clientName": "Client-2",
              "userAssignedForWeeks": "8-16"
          },
          {

              "TimeLoggedOnProjectDaysAgo": 2
           },
           {
              "billable": true,
              "common": true
           }
        ]
    }
  ]
}

Reward Score Calculation

The reward score for this use case was calculated based on the proximity between the ranking of projects in the timesheet returned by Personalizer and the rows where the user actually logged time, as follows:

  • Time logged in the 1st row of the ranked timesheet table, then reward score of 1
  • Time logged in the 2nd row of the ranked timesheet table, then reward score of 0.6
  • Time logged in the 3rd row of the ranked timesheet table, then reward score of 0.4
  • Time logged in the 4th row of the ranked timesheet table, then reward score of 0.2
  • Time logged in the 5th row of the ranked timesheet table, then reward score of 0
  • Time logged in the 6th row of the ranked timesheet table, then reward score of -0.5
  • Time logged in the 7th row or later of the ranked timesheet table, then reward score of -1

The above approach to reward score calculation assumes that most of the time users will not need to fill out their timesheet for more than 5 projects at a given time. Hence, when a user logs time against multiple projects, the scores can be added up and then capped to the range -1 to 1 when calling the Personalizer Rewards API.
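The timesheet scoring rule, including the summing and capping just described, can be sketched as follows. This is illustrative only; in the real implementation the final clamped value is what gets sent to the Rewards API.

```python
def timesheet_reward(row):
    """Map the 1-based row where time was logged to a per-row reward score."""
    scores = {1: 1.0, 2: 0.6, 3: 0.4, 4: 0.2, 5: 0.0, 6: -0.5}
    return scores.get(row, -1.0)  # 7th row or later: -1


def combined_reward(rows_logged):
    """Sum the per-row scores and cap the total to Personalizer's [-1, 1] range."""
    total = sum(timesheet_reward(r) for r in rows_logged)
    return max(-1.0, min(1.0, total))


print(combined_reward([1, 2]))  # 1.0 + 0.6 = 1.6, capped to 1.0
```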

Results

View of the timesheet table having project names alphabetically ordered before personalization:

project names alphabetically ordered before Azure personalization

View of the timesheet table where project names are ordered based on ranking returned by Personalization Service:

timesheet table ordered by Azure Personalization Service

Testing

To verify the results of implementing Personalizer in our selected use cases, unit tests proved effective. They were helpful in two important aspects:

  1. Injecting a large number of user interactions (learning loops)
  2. Simulating user behavior toward a specific pattern

This provided an easy way to verify how Personalizer reflects the current and changing trends injected into the user behavior via unit tests, using reward scores and its exploration capability. It also enabled us to test the different configuration settings provided by the Personalizer service.
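The shape of such a test can be illustrated with a toy stand-in for the service: a frequency-based ranker that accumulates reward scores per action. This stub is purely hypothetical and only shows how injected interactions shift the top-ranked item; the real tests called the Personalizer Rank and Reward APIs.

```python
from collections import Counter


class StubRanker:
    """Toy stand-in for Personalizer: ranks actions by cumulative reward received."""

    def __init__(self, actions):
        self.rewards = Counter({a: 0.0 for a in actions})

    def rank(self):
        # Highest cumulative reward first, mimicking a fully converged model.
        return [action for action, _ in self.rewards.most_common()]

    def reward(self, action, score):
        self.rewards[action] += score


ranker = StubRanker(["Project-A", "Project-B", "Project-C"])
for _ in range(20):          # simulate 20 learning loops of the user picking Project-B
    ranker.rank()            # rank call precedes each simulated selection
    ranker.reward("Project-B", 1.0)

print(ranker.rank()[0])  # Project-B
```

The real Personalizer converges gradually and explores, which is exactly what the test runs below measure; the stub skips both to keep the example self-contained.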

Test Run 1

This first test run simulated different user choices with different exploration settings. The results show the number of learning loops after which the rankings began reflecting the user preference, first intermittently and then consistently.

Unit Test Scenario | Learning Loops, Results, and Exploration Setting

  • User selection of Project-A: Personalizer Service started ranking Project-A at the top intermittently after 10-20 learning loops and ranked it consistently at the top after 100 learning loops, with exploration set to 0%.
  • User selection of Project-B: Personalizer Service started reflecting the change in user preference (from Project-A to Project-B) by ranking Project-B at the top intermittently after 100 learning loops and ranked it consistently at the top after 1200 learning loops, with exploration set to 0%.
  • User selection of Project-C: Personalizer Service started reflecting the change in user preference (from Project-B to Project-C) by ranking Project-C at the top intermittently after 10-20 learning loops and ranked it almost consistently at the top after 150 learning loops, with exploration set to 50%. Personalizer adjusted to the new user preference more quickly when exploration was utilized.
  • User selection of Project-D: Personalizer Service started reflecting the change in user preference (from Project-C to Project-D) by ranking Project-D at the top intermittently after 10-20 learning loops and ranked it almost consistently at the top after 120 learning loops, with exploration set to 50%.

Test Run 2

In this second test run, we observed the impact of having, removing, and re-adding sparse (minimally effective) features.

Unit Test Scenario | Learning Loops, Results, and Exploration Setting

  • User selection of Project-E: Personalizer Service started reflecting the change in user preference (from Project-D to Project-E) by ranking Project-E at the top intermittently after 10-20 learning loops and ranked it almost consistently at the top after 150 learning loops, with exploration set to 20%.
  • User selection of Project-F: Personalizer Service started reflecting the change in user preference (from Project-E to Project-F) by ranking Project-F at the top intermittently after 10-20 learning loops and ranked it almost consistently at the top after 250 learning loops, with exploration set to 20%.
  • User selection of Project-G: Two less effective (sparse) features of type datetime were removed. Personalizer Service started reflecting the change in user preference (from Project-F to Project-G) by ranking Project-G at the top intermittently after 5-10 learning loops and ranked it almost consistently at the top after only 20 learning loops, with exploration set to 20%.
  • User selection of Project-H: The two datetime sparse features were added back. Personalizer Service started reflecting the change in user preference (from Project-G to Project-H) by ranking Project-H at the top intermittently after 10-20 learning loops and ranked it almost consistently at the top after 500 learning loops, with exploration set to 20%.
Thanks for reading! In the next part of this blog post, we will look at the best practices and recommendations for implementing Personalizer solutions. We will also touch upon the capacities and limits of the Personalizer service at present.