This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins with some background on me and how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we walk through a step-by-step example that uses Excel to solve the problem.

Background

As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal against repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen the solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques I have used to generate C#, XML, JSON, and more. I will use data loading in Microsoft Power Apps and Dynamics as a real-world example; however, we need to start with the basics.

The Basics

Before going into the data-loading example, I want to provide a very simple one. Keep in mind that there may be more effective solutions to this specific problem that do not use Excel; I am using it here purely to illustrate the approach. Let's say you have a data model and a contact model that are, for the most part, the same except for some property names, and you need to write methods to map between them. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let's say you have a hundred of these to do, and each model may have 50+ properties! This would very quickly turn into a time-consuming and mundane task; not to mention you would likely make a typo along the way that another developer would be sure to point out in the next code review. Let's see how Excel can help in this situation.

In this scenario, the first thing you will need is the raw data for the contact and data models. One way to get it is to use the property declarations themselves. Consider the classes below:
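The original post shows these classes as a screenshot. A minimal sketch of what they might look like, with the property names taken from the mapping code above (everything else is illustrative):

public class Contact
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PhoneNumber { get; set; }
    public string CellPhone { get; set; }
}

public class ContactDataModel
{
    public string firstName { get; set; }
    public string lastName { get; set; }
    public string phoneNumber { get; set; }
    public string mobileNumber { get; set; }
}

Note that the column the property name lands in after Text to Columns (column D in the author's workbook) depends on how the pasted declarations are indented and split.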

Use Properties to Identify Classes

  1. Create 3 Excel worksheets called Primary, Secondary, and Generator
  2. Copy/paste the property statements from Contact into Primary worksheet and ContactDataModel into a Secondary worksheet.
  3. Select Column A in the Primary worksheet
    Create three Excel Worksheets
  4. In Excel, select the Data tab and then Text to Columns
  5. Choose Delimited, then Next
    Choose Delimited
  6. Uncheck all boxes and then check the Space checkbox, then Finish
    Uncheck All Boxes
  7. Your worksheet should look like the following:
    Sample of Worksheet
  8. Repeat 3-7 with the Secondary worksheet
  9. Select cell A1 and then press the = key
  10. Select the Primary worksheet and then cell D1
  11. Press the Enter key, you should return to the Generator worksheet and the text “FirstName” should be in cell A1
  12. Select cell B1 and then press the = key
  13. Select the Secondary worksheet and then cell D1
  14. Press the Enter key; you should return to the Generator worksheet and the text “firstName” should be in cell B1
  15. Drag and select A1:B1. Click the little square in the lower-right corner of your selection and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that by dragging the cells down, it incremented the rows in the formula.
    Incremented Rows in the Formula
    Press CTRL+~ to switch back to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this: the IF checks whether A1 is empty (a reference to a blank cell evaluates to 0) and returns an empty string if so; otherwise, it concatenates the two property names into an assignment statement.
  17. Similar to an earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    Select and Drag Formula
  18. You can now copy/paste the values in column C into the code:
    Copy and Paste Values into Column C

As you continue, Excel keeps track of the most recent Text to Columns settings used, so if you paste another set of properties into the Primary and Secondary worksheets, you should be able to skip steps 1-5 for the remaining classes. In the sample class file and workbook, I have included Address models as an illustration.
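If you want the generated text to drop straight into a mapping method like the one at the top of this post, you can extend the formula from step 16 to emit the full assignment. A sketch, assuming the target object is named contact and the source is named datamodel:

=IF(A1=0,"","contact." & A1 & " = datamodel." & B1 & ";")

Dragging that down would produce lines such as contact.FirstName = datamodel.firstName; that are ready to paste into your method.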

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to weigh the time investment against the payoff of code generation and use it where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps, which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models

Overnight, the world's workforce has moved into the home office. As a result, online meetings are now the only way we meet. For many organizations, this sudden change has dramatically impacted how their business operates. Staff members were accustomed to collaborating in person; now they feel disconnected. Salespeople who relied on face-to-face interaction to close the deal are suddenly isolated. The visual cues, facial expressions, and non-verbal communication we took for granted are gone. Right now, across the globe, millions of professionals are facing these new communication anxieties.

Microsoft Teams has many features that will help overcome these challenges.  Team chats, channels and document collaboration can empower your online meetings.  These ten tips will help you ensure that all participants are comfortable with the technology so they can focus on the important part of the meeting – the people and content.

#1 – Set Expectations in the Meeting Invite

The meeting invite is a great opportunity to let people know what to expect by stating the purpose and agenda items. This will make the meeting meaningful and keep everyone on track. Will documents or other visuals be shared? Does your organization endorse the use of web cameras? Will notes or a recording be available for people to refer to after the meeting? Keep the message short but if there is anything you need to point out to ensure people can participate fully, the invite is a good opportunity to do that.

If you’re expecting newcomers, you might include a little extra guidance about accessing the meeting or where questions can be addressed. Although most people will not need this, including it sets a welcoming tone. For example:

New to using Microsoft Teams?

For the best experience:

  1. Click Join Microsoft Teams Meeting
  2. You may be prompted to choose how you want to connect – browser or desktop app
  3. Select audio option – computer or phone

If you need to join with only audio, you can use the phone number provided, or select another number through Local numbers.

If you’re dialing in, you can press *1 during the meeting to hear menu options.

If you need any help, click on Learn more about Teams or contact me at first.last@email.com

#2 – Review Meeting Controls with Newcomers

If you plan to have people join who are completely new to Teams, take a minute or two to review the meeting controls so people can participate comfortably. If you’re going to invite people to turn on their webcams, this is a great opportunity to illustrate the Start with a Blurred background option (see Tip #5).

Microsoft Teams App Bar Explained

  1. Meeting duration
  2. Webcam on or off
  3. Mute or unmute yourself
  4. Screen sharing
  5. More options
  6. Conversations
  7. Participant list
  8. Leave meeting

#3 – Mute is Everyone’s Friend

In meetings with more than 5 people, anyone who joins after the meeting has started will join as muted to reduce noise. If you notice disruptive noise from others, you can mute that person or mute all, easily from the participant list.

Mute People in Microsoft Teams or Mute All

If someone has been muted, they’ll get a notification letting them know. They can unmute themselves using their meeting controls when they need to speak. For those joining by calling in, *6 mute/unmutes.

#4 – Joining from Multiple Devices? Avoid Echoing.

Sometimes, people will join the meeting with their computer and then dial into the meeting with their phone for audio. To avoid an echo, just make sure your computer speaker is muted. There is an option to do this on the join screen prior to entering the meeting. If you forget, just turn off your computer sound. 

Mute or Blur Your Background in Microsoft Teams App

#5 – “Mute” Any Distracting Backgrounds

If you need to share your webcam but your background could be distracting, you can select Start video with blur in More Options. This blurs the background behind you for the duration of the camera share.

#6 – Pick What to Share

Don’t want everyone to read your email when you share your desktop? You have multiple choices to share:

  • Your entire desktop or screen
  • A window or application
  • PowerPoint
  • Whiteboard

With application sharing, participants will only see the application or window that you choose to share. They will not see any other application or notifications you might receive.

#7 – Let a Coworker Control Your Screen

A coworker can request control when you are sharing your desktop so that he or she can control the screen and cursor. If you choose to share an application, like PowerPoint, rather than your desktop, control would be limited to the shared application. For security reasons, external participants cannot request control when you are sharing your desktop.

#8 – Take Notes in the Meeting

With Microsoft Teams, taking and sharing meetings notes is easy. Notes can be accessed from More Options and are available before, during, and after the meeting.

Taking Notes in a Microsoft Teams Meeting

#9 – Two ways to collaborate on documents

You can work on files together through a screen share, where one person types and the others talk. Or, you can upload the document to the meeting chat and allow multiple people to work on the document in real-time.

#10 – Take Advantage of the Resources Available

Here are some good articles from the Microsoft Blog on Remote Work and Teams Meetings:

Thanks to authors Tacy Holliday, Chris Miller, and Guy Schmidt for their contributions to this blog.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for how to anonymize or customize the data that is being sent to Application Insights. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series.

Two things to consider with telemetry:

  1. Adding additional telemetry that is not recorded by Application Insights.
  2. Removing or anonymizing data that is already being sent to Application Insights.

There will be a later post in this series discussing how to add new telemetry; this post focuses on anonymizing or removing data.

Personally Identifiable Information (PII) Already in Application Insights Account

We'll start with a likely scenario: during an audit or during testing, you discovered that you are logging PII to your Application Insights account. How can you go about fixing that? The short answer is to delete the entire Application Insights resource. That means you'll lose access to all historical telemetry that was in the account, and your production system will no longer be logging telemetry anywhere unless you create a new account and update your production system with the new telemetry key. However, this does solve your immediate problem of retaining PII. See the Microsoft documentation for details on what is captured by Application Insights and how it's transmitted and stored.

Application Insights does provide a PURGE endpoint, but requests are not timely, the endpoint isn't well documented, and it will not properly update metrics to account for the data that was purged. In general, if you have a compliance concern, delete the Application Insights account. Remember, Application Insights is designed to be a highly available, high-performance telemetry platform, which means it is designed around being an append-only system. The best solution is simply not to send data to the platform that you shouldn't.

API Use Case

Think of an API your business may have built. This API allows us to search for customers by e-mail to find their customer id. Once we have the customer id, we can make updates to their record, such as their first name, birthday, etc. By default, Application Insights records the request URL and the response code; it does NOT record any HTTP headers, the request body, or the response body. First, let's think about how we might design the search endpoint. We have two approaches:

  1. GET /api/customer/search?emailAddress=test@appliedis.com
  2. POST /api/customer/search
    a. In the request body, send JSON:
    { "emailAddress": "test@appliedis.com" }

If we design the API using approach #1, we will be sending PII to Application Insights by default. However, if we design the API using approach #2, by default no PII would be sent to Application Insights. Always try to keep your URLs free of any PII.

That may not always be possible, so let’s look at another use case. Let’s say we have the primary key of a customer record and we want to view and make edits to that record, what would the endpoints look like:

  1. GET /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6
  2. PUT /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6

Now, depending on your regulatory environment, logging these URLs to Application Insights might present a problem. Notice that we are not logging e-mail addresses, phone numbers, or names, but we are still logging behavior about an individual: when they accessed the site and when their profile was updated. To avoid this, we would like to anonymize the URL data that is being sent to Application Insights.

Anonymize Data Sent to Application Insights

This section assumes you are using ASP.NET Core and have already configured Application Insights; see my previous blog post for details. Also, if you need to troubleshoot your configuration or verify it's working as expected, please see my other blog post.

The Application Insights NuGet package provides an interface for exactly this purpose called ITelemetryProcessor. You simply need to implement it, including its Process method. A telemetry processor implementation acts much like ASP.NET Core middleware, in that there is a chain of telemetry processors. You must provide a constructor in your implementation that accepts the ITelemetryProcessor that is next in the chain, and in your Process method you are responsible for calling on to that next processor. The last processors in the chain are the ones provided by the NuGet package; they implement the same interface and send the telemetry over the wire to the actual Application Insights service in Azure. The Process method you are required to implement receives a single argument, ITelemetry. You can cast it to one of the subclasses, e.g. DependencyTelemetry, RequestTelemetry, etc., and then mutate the telemetry in whatever way you need to, e.g. to anonymize data. You are then responsible for calling the Process method on the next telemetry processor in the chain, i.e. the one that was provided to the constructor of your class. If you want a given telemetry item to never be sent to Application Insights, simply omit the call to the next processor's Process method.

Now we will look at the source code for a processor that does what we are proposing: it looks for anything that resembles a customer id in the URL of a RequestTelemetry item and replaces it with the word "customerid".
RequestTelemetry
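The processor appears as a screenshot in the original post; a minimal sketch of what it might look like is below, assuming customer ids are GUIDs in the URL path (the class name and regular expression are illustrative, not the author's exact code):

using System;
using System.Text.RegularExpressions;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomerIdTelemetryProcessor : ITelemetryProcessor
{
    // Assumption: customer ids are GUIDs embedded in the request URL.
    private static readonly Regex CustomerIdPattern = new Regex(
        "[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}",
        RegexOptions.Compiled);

    private readonly ITelemetryProcessor _next;

    // The next processor in the chain is supplied by the SDK.
    public CustomerIdTelemetryProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Only request telemetry is of interest; traces, dependencies, etc. pass through untouched.
        if (item is RequestTelemetry request)
        {
            if (request.Name != null)
            {
                request.Name = CustomerIdPattern.Replace(request.Name, "customerid");
            }

            if (request.Url != null)
            {
                request.Url = new Uri(CustomerIdPattern.Replace(request.Url.ToString(), "customerid"));
            }
        }

        // Always call the next processor so the (possibly modified) telemetry is still sent.
        _next.Process(item);
    }
}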

As seen above, in the constructor we receive the next telemetry processor in the chain. In the Process method, we check whether we have a RequestTelemetry, ignoring all other telemetry types like TraceTelemetry or DependencyTelemetry. The RequestTelemetry has .Name and .Url properties, both of which might contain the URL with our PII. We use a regular expression to see if either contains a customer id; if so, we replace it with the word "customerid". Then we always call the next telemetry processor in the chain so that the modified telemetry is still sent to Application Insights.
Remember, just having this ITelemetryProcessor coded won't do anything. We still need to register it with the Application Insights library. In Startup.cs, you will need to add a line to the ConfigureServices method, as shown below:

Add a Line to the ConfigureServices Method
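If the screenshot doesn't come through, the registration is a single call on the service collection, shown here with the class name from the sketch above:

// In Startup.cs (requires the Microsoft.ApplicationInsights.AspNetCore package).
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Register the custom processor; the SDK wires up the rest of the chain.
    services.AddApplicationInsightsTelemetryProcessor<CustomerIdTelemetryProcessor>();
}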

Now that you’ve registered the processor, it’s just a matter of deploying the code. If you wanted to debug this locally while still sending telemetry live to Application Insights in Azure, please see my blog post with details on how to do that.

Summary

You learned about the ITelemetryProcessor extension point in the Application Insights NuGet package and how to use it to prevent PII from being logged to your Application Insights account. We discussed how to design your API endpoints so that, hopefully, you don't need to customize your telemetry configuration at all. Lastly, I showed you how to deal with PII that may have been logged accidentally to your production Application Insights account. You may also want to take advantage of a relatively new feature of Application Insights that lets you set the retention period of logs. Previously it was fixed at 90 days, but now you can configure it to as low as 30 days. This may be useful for handling right-to-be-forgotten requests if your logging system is storing PII and you need to ensure it is removed within 30 days of the request. Next in this blog series, we will discuss logging additional telemetry to create an overall better picture of your system's health and performance.

Once you've decided to instrument your ASP.NET Core application with Application Insights, you may be looking for a quick way to troubleshoot your telemetry configuration. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series. How do you go about testing your telemetry configuration? Typically, developers adjust the application and then deploy to Azure, ideally a development environment in Azure. However, making a change to the application, building, publishing to Azure, testing the given endpoints, and waiting for telemetry to appear in Application Insights can take upwards of 30 minutes per change, assuming you know what you are doing and make no mistakes. Is there a way to work locally?

Troubleshooting Locally

Let me start by discussing the end goal. We want to simulate our production environment while running locally on our dev machine. Azure typically provides local emulators for services like Storage and Cosmos DB, but it does not provide an Application Insights emulator. So while running locally to simulate production, we still need to send data to an actual Azure Application Insights account. To simulate running in production, we should publish our .NET application in Release mode and start it outside of Visual Studio. One reason for starting the application outside Visual Studio is that our production environment will not have Visual Studio installed. Another is that Visual Studio includes a Diagnostics panel that captures the Application Insights telemetry and prevents it from being sent to the Azure Application Insights account. I'd like to emphasize that the Diagnostics panel built into Visual Studio is not an emulator and shouldn't be used for that purpose.

First, we must publish the application in Release mode. You can do that using the dotnet command-line as shown below.

Publish in Release Mode
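If the screenshot above doesn't render, the command is along these lines, run from the project directory:

dotnet publish -c Release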

This will publish to a directory similar to the below,

Publish to Directory

Once in the directory that contains the build artifacts, we should find both appsettings.json and the .dll for our main application, CustomerApi.dll in my case. From the command line, we can then run Kestrel directly using the following command:

Run Kestrel Directly
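As referenced later in this post, the command is simply:

dotnet CustomerApi.dll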

If you are using the defaults, your application will now be running and available in a browser at either http://localhost:5000/ or https://localhost:5001/. We are likely still missing one step: configuring the telemetry key for Application Insights. In the bin\Release\netcoreapp3.0\ folder, locate appsettings.json, open it, and put the telemetry key in the file.
configuring the telemetry key
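A minimal sketch of that setting, assuming the default configuration shape used by the Application Insights SDK (replace the placeholder with your own key):

{
  "ApplicationInsights": {
    "InstrumentationKey": "<your-instrumentation-key>"
  }
}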

If you go back to the command-line you can press Ctrl+C to exit the running web application and then re-run the dotnet CustomerApi.dll command to restart the application. We now have an application running locally that is sending telemetry to Application Insights in Azure.

View Live Telemetry from Local Machine

In the Azure portal, open the Application Insights resource and then locate the “Live Metrics Stream” blade.
Live Metrics Stream

The Live Metrics panel should open and connect as long as the application is running locally using “dotnet CustomerApi.dll”. Once open, scroll to the bottom of the pane.

Bottom of Pane

At the bottom, you will see a list of connected servers. In my example below, you see two servers. The one highlighted in red is my local developer machine. The other server is the Azure App Service that I have running in my development environment in Azure.

Developer and Azure Application Server

To quickly recap: we have our application running locally, outside Visual Studio, from the command line, and in Azure Application Insights we can see that our local machine is connected to Live Metrics. To actually see telemetry flow into this panel, you will likely want to make one other change. In the upper-right, click on the filter icon to adjust the Live Metrics filters.

Telemetry Sample

You will then be prompted with the following dialog. If you trust the connected servers, you can safely authorize them.

Authorize Connected Servers

You will then see a dialog with the current filters. Notice that the default configuration will only show failed requests and dependency calls. Since we are troubleshooting it’s likely you will want to see all requests. Feel free to click the “x” to remove both filters and then the “Save” button.

Current Filter Dialogue

Once you have completed this step, you can go back to your web browser on your local machine, either http://localhost:5000/ or https://localhost:5001/ and then make a request to your API. I tried a URL I know returns a 404 response. You can see the live telemetry that showed up for me:

Sample Telemetry

Then, click on that row for more details about the given telemetry item. This telemetry is also being logged to Application Insights, and you will be able to see it on all the usual dashboards and search for it with a Log Analytics query; just be aware there is still the typical 2 to 5 minute delay between when it is sent and when it appears in queries and dashboards.

Summary

You have now learned how to troubleshoot Azure Application Insights quickly and without needing to deploy your application to Azure. To summarize, you run “dotnet publish” in “Release” mode locally and then run the application from the command-line outside Visual Studio. This is done for a few reasons:

  • When publishing in release mode, you do not need to worry about appsettings.development.json
  • By running outside Visual Studio, you do not need to worry about launchSettings.json setting any special environment variables that don’t match your production environment (e.g. ASPNETCORE_ENVIRONMENT)
  • When running outside Visual Studio, you do not need to worry about the diagnostics panel deciding to capture your telemetry and preventing it from being sent to Azure.

Once your application is running locally and has the Application Insights telemetry key configured properly, you will find the telemetry in the "Live Metrics" view, avoiding the typical 2 to 5 minute delay between sending telemetry and seeing it elsewhere in Application Insights.

If you are concerned that this setup will not allow for the use of the Visual Studio Editor, think again! Once you have the application running outside Visual Studio, simply use the “Attach To Process…” menu item in Visual Studio. This gives you the best of both worlds:

Visual Studio Debugger

Hopefully, this post helped you understand how to more quickly troubleshoot your Application Insights telemetry configuration. That will come in handy in the next post in this series, where we talk about customizing telemetry to keep PII (Personally Identifiable Information) out of your Application Insights logs.

Implementing a cloud strategy and modernizing legacy systems are two of the core objectives of most IT organizations. Purchasing a COTS product or SaaS offering can speed up modernization and comes with a lot of advantages. COTS products come with a proven track record and address specific business needs that would be difficult to justify building yourself. A COTS product also shifts the responsibility for creating features from your organization to the vendor. Finally, COTS products promise a shorter timeframe to implementation. But even though you are purchasing a solution to 80% of your problem, the hardest parts of implementing a COTS product are still your responsibility. Below are some key areas you will still own.

Security and Infrastructure

Security and infrastructure are your responsibility; an off-the-shelf product or Software as a Service (SaaS) product won't address them for you. If it is a SaaS product, how will your hybrid network access it, and how is that access governed? You'll need to do a risk assessment of the SaaS product, including how you connect to it, how it stores its data, and even how the vendor builds the software. If it is an off-the-shelf product, how will it be installed in the cloud? Ask whether it is cloud-native or needs to run on virtual machines in the cloud. If it runs on virtual machines, are those hardened, and who has access to them? Cloud virtual machines complicate networking since they need to be accessed from on-premises and may still need to reach into the cloud or the internet. That can leave a wide attack surface you'll need to account for. Security and infrastructure are the biggest concerns, and you'll need to own them.

Automation

One of the promises of moving to the cloud is gaining business agility, and automation is a key component of reaching that goal. A SaaS product removes the burden of deploying and maintaining the application, but there may still be aspects you need to automate. For example, if the SaaS product must be configured, it might have a UI and an API for doing so. It's in your best interest to bring that SaaS product into your normal development pipeline and consider infrastructure as code as it applies to it. If you purchase a COTS product, be sure you can stand up an entire environment, including installation of the product, with the click of a button. There is no excuse for not automating everything, and there are plenty of tools in Azure DevOps pipelines for integration and automation.

Integration

A COTS product provides 80% of the functionality needed, but what about the other 20%? The remaining functionality the product doesn't provide is likely what differentiates your company from the others. There are always edge cases and custom features that you'll want that either the vendor does not provide or is expensive for the vendor to implement. You'll need to bring that work in-house or hire a system integrator to hook things together. Whether it's a SaaS product or a COTS product, integrating with the applications that provide the remainder of your functionality is your responsibility, as is understanding how integration will work before purchasing the COTS product. A good COTS product will have a solid HTTP REST API. If the product you're purchasing doesn't have a simple API, consider a wrapper to make the integration easier; API Management is an Azure service that can do that translation for you. You might find that you want to consolidate the COTS APIs into one that makes sense to the rest of your systems. Large COTS products should also support messaging of some kind; messaging helps build loosely coupled components with high cohesion. COTS products might also offer file-based and database integration, but these should be avoided. Integration of the COTS product is your responsibility, and the effort can equal the implementation effort of the COTS product itself.

Conclusion

COTS products can provide great benefits to your company and bring new functionality quickly. Understand, though, that your IT department will still have to drive the overall architecture, and you are responsible for everything around the COTS product. The bulk of this work falls under security, infrastructure, automation, and integration. Focus on these concerns and you'll have a successful implementation.

Note: This blog post is *not* about Kubernetes infrastructure API (an API to provision a Kubernetes cluster). Instead, this post focuses on the idea of Kubernetes as a common infrastructure layer across private and public clouds.

Kubernetes is, of course, well known as the leading open-source system for automating the deployment and management of containerized applications. However, its uniform availability is, for the first time, giving customers a “common” infrastructure API across public and private cloud providers. Customers can take their containerized applications and Kubernetes configuration files and, for the most part, move them to another cloud platform, all without sacrificing the use of cloud provider-specific capabilities, such as storage and networking, that differ across each cloud platform.

At this point, you are probably thinking about tools like Terraform and Pulumi that have focused on abstracting the underlying cloud APIs. These tools have indeed enabled a provisioning language that spans cloud providers. However, as we will see below, the Kubernetes “common” construct goes a step further: rather than being limited to a statically defined set of APIs, Kubernetes extensibility allows the API to be extended dynamically through the use of plugins, described below.

Kubernetes Extensibility via Plugins

Kubernetes plugins are software components that extend and deeply integrate Kubernetes with new kinds of infrastructure resources. Plugins realize interfaces like CSI (Container Storage Interface). CSI defines an interface along with the minimum operational and packaging recommendations for a storage provider (SP) to implement a compatible plugin.

Other examples of interfaces include:

  • Container Network Interface (CNI) – Specifications and libraries for writing plug-ins to configure network connectivity for containers.
  • Container Runtime Interface (CRI) – Specifications and libraries for container runtimes to integrate with kubelet, an agent that runs on each Kubernetes node and is responsible for spawning the containers and maintaining their health.

Interfaces and compliant plug-ins have opened the floodgates to third-party plugins for Kubernetes, giving customers a whole range of options. Let us review a few examples of “common” infrastructure constructs.

Here is a high-level view of how a plugin works in the context of Kubernetes. Instead of modifying the Kubernetes code for each type of hardware or cloud provider-offered service, it's left to the plugins to encapsulate the know-how to interact with the underlying hardware resources. A plugin can be deployed to a Kubernetes node as shown in the diagram below. It is the kubelet's responsibility to advertise the capability offered by the plugin(s) to the Kubernetes API service.

Kubernetes Control Plane

“Common” Networking Construct

Consider a networking resource of type load balancer. As you would expect, provisioning a load balancer in Azure versus AWS is different.

Here is a CLI for provisioning ILB in Azure:

CLI for provisioning ILB in Azure

Likewise, here is a CLI for provisioning ILB in AWS:

CLI for provisioning ILB in AWS

Kubernetes, based on its network plugin model, gives us a “common” construct for provisioning the ILB that is independent of the cloud provider syntax. For example:

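The YAML snippet is truncated in the original post; a minimal sketch of the common construct is a Service of type LoadBalancer (names are illustrative, and an internal load balancer typically still requires a provider-specific annotation, such as service.beta.kubernetes.io/azure-load-balancer-internal: "true" on AKS):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080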

“Common” Storage Construct

Now let us consider a storage resource type. As you would expect, provisioning a storage volume in Azure versus Google is different.

Here is a CLI for provisioning a disk in Azure:

CLI for provisioning a disk in Azure

Here is a CLI for provisioning a persistent disk in Google:

CLI for provisioning a persistent disk in Google

Once again, under the plugin model, Kubernetes gives us a “common” construct for provisioning storage that is independent of the cloud provider syntax.

The example below shows this “common” storage construct across cloud providers. In this example, a claim for a persistent volume of size 1Gi with access mode “ReadWriteOnce” is being made. Additionally, the storage class “cloud-storage” is associated with the request. As we will see next, persistent volume claims decouple us from the underlying storage mechanism.

cloud-storage-claim
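The claim shown in the screenshot above would look something like this sketch, based on the values described in the text:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloud-storage-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloud-storage
  resources:
    requests:
      storage: 1Gi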

The StorageClass determines which storage plugin gets invoked to support the persistent volume claim. In the first example below, StorageClass represents the Azure Disk plugin. In the second example below, StorageClass represents the Google Compute Engine (GCE) Persistent Disk.

StorageClass

StorageClass Diagram 2
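The two screenshots correspond to StorageClass definitions along these lines (a sketch using the legacy in-tree provisioner names):

# Azure Disk
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/azure-disk
---
# Google Compute Engine Persistent Disk
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/gce-pd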

“Common” Compute Construct

Finally, let us consider a compute resource type. As you would expect, provisioning a compute resource in Azure versus GCE is different.

Here is a CLI for provisioning a GPU VM in Azure:

CLI for provisioning a GPU VM in Azure

Here is a CLI for provisioning a GPU in Google Cloud:

CLI for provisioning a GPU in Google Cloud:

Once again, under the plugin (device) model, Kubernetes gives us a “common” compute construct across cloud providers. In the example below, we are requesting a compute resource of type GPU. An underlying plugin (NVIDIA) installed on the Kubernetes node is responsible for provisioning the requisite compute resource.

requesting a compute resource of type GPU

Source: https://docs.microsoft.com/en-us/azure/aks/gpu-cluster
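Following the AKS documentation linked above, the request is expressed as a resource limit on the container; a minimal sketch (the image name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
    - name: gpu-workload
      image: <your-gpu-image>
      resources:
        limits:
          nvidia.com/gpu: 1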

Summary

As you can see from the examples discussed in this post, Kubernetes is becoming a “common” infrastructure API across private, public, hybrid, and multi-cloud setups. Even traditional “infrastructure as code” tools like Terraform are building on top of Kubernetes.

ACA Compliance Group needed help streamlining its communications landscape so its fast-growing workforce could collaborate more effectively. AIS recommended starting small with Microsoft Teams adoption and utilizing Microsoft Planner to gain advocates, realize quick wins, and gather insights to guide the larger rollout.

Starting Their Cloud Transformation Journey

The cloud brings many advantages to both companies and their employees, including unlimited access and seamless collaboration. However, to unleash the full power of cloud-based collaboration, a company must select the right collaboration technology that fits their business needs and ensures employees adopt the technology and changes in practices and processes. This ultimately benefits the business through increased productivity and satisfaction.

In early 2019, an international compliance firm with around 800 employees contacted AIS to help migrate multiple email accounts into a single Office 365 (O365) Exchange account. They invited AIS to continue their cloud journey and help them:

  • Understand their existing business processes and pain points across multiple time zones, countries, departments, and teams.
  • Provide their employees with a secure, reliable, and integrated solution to effective communication and collaboration.
  • Increase employee productivity by improving file and knowledge sharing and problem-solving.
  • Reduce cost from licensing fees for products duplicating features already available through the company’s enterprise O365 license.

Kicking Off a Customer Immersion Experience

First, AIS provided a Microsoft Customer Immersion Experience (CIE) demonstration, which served as the foundational step to introduce all O365 tools. After receiving stakeholder feedback, needs, and concerns, we collaboratively determined the best order for rolling out the O365 applications. The client selected to move forward with Microsoft Teams adoption as the first step to implementing collaboration software in the organization.

Pilots for Microsoft Teams Adoption

Next, we conducted a pilot with two departments to quickly bring benefits to the organization without a large cost investment and to gather insights that would inform the overall Teams adoption plan and strategy for the entire organization. We confirmed with pilot study employees that they saw and welcomed the benefits that Microsoft Teams provides, including:

  • Reduced internal emails.
  • Seamless communication and collaboration among (remote) teams/departments.
  • Increased productivity, efficiency, and transparency.
  • Centralized and accessible location for files, documents, and resources in Teams.

The pilot study also found that adopting Microsoft Teams in the organization would require a paradigm shift. Many employees were used to email communication, including sending attachments back and forth that was hard to track. In addition, while some departments had sophisticated collaboration tools, a common collaboration tool across the company did not exist. For web conferencing, for example, different departments preferred different tools, such as GoToMeeting and WebEx, and most of them incurred subscription fees. Employees had to install multiple tools on their computers to collaborate across departmental boundaries.

QUESTIONS ABOUT TEAMS ADOPTION PROCESS?

Embracing Benefits of Microsoft Teams with Organizational Change Management (OCM)

To help employees understand the benefits of Teams, embrace the new tool, and willingly navigate the associated changes, we formed a project team for the organization-wide deployment and Microsoft Teams adoption, with roles including a Project Manager, a Change Manager, a UX Researcher, a Business Analyst, and a Cloud Engineer. Organizational Change Management (OCM), User Experience (UX), and business analysis were as critical as the technical aspects of the cloud implementation.

Building on each other’s expertise, the project team worked collaboratively and closely with technical and business leaders at the company to:

  • Guide communication efforts to drive awareness of the project and support it.
  • Identify levers that would drive or hinder adoption and plan ways to promote or mitigate.
  • Equip department leaders with champions and facilitate end-user Teams adoption best practices.
  • Guide end users on how to thrive using Teams through best practices and relevant business processes.
  • Provide data analytics and insights to support target adoption rates and customize training.
  • Use an agile approach to resolve both technical issues and people’s pain points, including using Teams for private chats, channel messages, and meetings.
  • Develop a governance plan that addressed technical and business evolution, accounting for the employee experience.

Cutting Costs & Boosting Collaboration

At the end of the 16-week engagement, AIS helped the client achieve its goals of enhanced collaboration, cost savings, and 90% Teams use with positive employee feedback. The company was well-positioned to achieve 100% by the agreed-upon target date.

Our OCM approach, grounded in the Prosci ADKAR® framework, a leading change management framework based on 20 years of research, significantly contributed to our project's success. As Prosci describes on its website, “ADKAR is an acronym that represents the five tangible and concrete outcomes that people need to achieve for lasting change”:

  • Awareness of the need for change
  • Desire to support the change
  • Knowledge of how to change
  • Ability to demonstrate skills and behaviors
  • Reinforcement to make the change stick

The OCM approach was designed to provide busy executives, leaders, and end users with the key support and actionable insights needed to achieve each outcome necessary for Teams adoption efficiently and effectively.

If you would like to participate in a CIE demonstration or learn more about adopting cloud-based collaboration tools and practices in your company, we are here to help!

READ MORE ABOUT OUR SUCCESS WITH
ACA COMPLIANCE GROUP

Please note: The extended support deadline for Exchange 2010 has changed since the original publication of this article. It has moved from January 14, 2020 to October 13, 2020. There is still time to consider your upgrade strategy and migrate your legacy Exchange environments to the Microsoft cloud!

Exchange 2010 is at the end of its journey, and what a long road it's been! For many customers, it has been a workhorse product facilitating excellent communications with their employees. It's sad to see the product go, but it's time to look to the future, and the future is in the Microsoft cloud with Exchange Online.

What does End of Support mean for my organization?

email security exchange 2010 end of support risks

While Exchange 2010 isn't necessarily vanishing from the messaging ecosystem, support for the product ends in all official capacities on January 14, 2020. Additionally, Office 2010 will hit the end of support on October 13, 2020, which means your old desktop clients will also be unsupported within the same year. Businesses using these Exchange and Office applications will be left without support from Microsoft, paid or free. End of support also means the end of monthly security updates. Without regular security updates and patches from Microsoft to protect your environment, your company is at risk.

  1. Security risks – Malware protection and attack surface protection become more challenging once a product is off lifecycle support. New vulnerabilities may never be disclosed or remediated.
  2. Compliance risks – As time goes on, organizations must adhere to new compliance requirements – for example, GDPR was a massive recent deadline. While managing these requirements on-premises is possible, it is often challenging and time-consuming. Office 365 offers improved compliance features for legal and regulatory requirements. The most notable is that Microsoft cloud environments comply with most regulatory needs, including HIPAA, FISMA, FedRAMP, and more.
  3. Lack of software and hardware support – There is no technical support for problems that may occur, such as bug fixes, stability and usability issues, and time zone updates. Dropped support for interoperation with 3rd party vendors like MDM and message hygiene solutions can mean your end-user access stops working, not to mention the desktop and mobile mail solutions already deployed, or perhaps being upgraded, around this now decade-old infrastructure.
  4. Speaking of old infrastructure – This isn't just about applications and services. For continued support and to meet compliance requirements, you must migrate to newer hardware to retain, store, and protect your mailbox and associated data. Office 365 absolves you of all infrastructure storage costs, which is a perfect opportunity, and often a justification in and of itself, to move to the cloud.

It’s time to migrate to Office 365 . . . Quickly!

There are many great reasons to move your Exchange environment to a hosted environment in Office 365, the biggest being that your company will no longer have to worry about infrastructure costs.

Here are some other significant burdens that you won't have to worry about:

  • Purchasing and maintaining expensive storage and hardware infrastructure
  • Time spent keeping up to date on product, security, and time zone fixes
  • Time spent on security patching OS or updating firmware
  • Cost for licensing OS or Exchange Servers
  • Upgrading to a new version of Exchange; you're always on the latest version of Exchange in Office 365
  • Maintaining compliance and regulations for your infrastructure whether Industry, Regional or Government
  • With an environment of thousands of users and potentially unlimited mailboxes, absolving your admins of day-to-day database, storage, server, and failover management is a huge relief and cost savings, letting your team focus on Exchange administration instead.

Another big cost of staying on-premises is storage: data repositories for mailbox data retention, archiving, and journaling. This value cannot be overstated. Do you have large mailboxes or archive mailboxes? Are you paying for an archiving or eDiscovery solution? Exchange Online Plan 2 licenses (often bundled into larger enterprise licenses such as E3 or E5) allow for archive mailboxes of unlimited size. These licenses also offer eDiscovery and compliance options that meet the needs of complex organizations.

The value of the integrated cloud-based security and compliance resources in the Office 365 environment is immense. Many of our customers have abandoned their entire existing MDM solutions in favor of Intune. Data Loss Prevention allows you to protect your company data against exfiltration. Office 365 Advanced Threat Protection fortifies your environment against phishing attacks and offers zero-day attachment reviews. These technologies are just the tip of the iceberg and can either replace or augment an existing malware and hygiene strategy. And all these solutions specifically relate and interoperate with Exchange Online.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

Some other technologies that seamlessly work with Exchange Online and offer integral protections to that product as well as other Microsoft cloud and SaaS solutions:

  • Conditional Access – Precise, granular access control to applications
  • Intune – Device and application management and protection
  • Azure AD Identity Protection – Manage risk levels for associate activity
  • Azure Information Protection – Classify and protect documents
  • Identity Governance – Lifecycle management for access to groups, roles, and applications

Think outside the datacenter

data center transformation services

There are advantages to thinking outside the (mail)box when considering an Exchange migration strategy to the cloud. Office 365 offers an incredible suite of interoperable tools to meet most workflows. So while we can partner with you on the journey to Office 365, don't overlook some of the other key tools available in the Microsoft arsenal, including OneDrive for Business, SharePoint Online, and Microsoft Teams, all of which could be potential next steps in your SaaS journey! Each tool is a game-changer in its own right, and each will bring incredible collaboration value to your associates.

AIS has helped many customers migrate large and complex on-premises environments to Office 365.

Whether you need to:

  • Quickly migrate Exchange to Exchange Online for End of Support
  • Move File services to OneDrive and SharePoint Online for your Personal drives/Enterprise Shares/Cloud File Services
  • Adopt Microsoft Teams from Slack/HipChat/Cisco Teams
  • Migrate large and complex SharePoint farm environments to SharePoint Online

Whatever it is, we’ve got you covered.

What to do next?

Modern Workplace Assessment for Exchange 2010

Take action right now and start a conversation with AIS today. Our experts will analyze your current state and roll out your organization's migration to Office 365 quickly and seamlessly.

To accelerate your migration to Office 365, let us provide you with a free Modern Workplace assessment that comprehensively evaluates:

  • Organization readiness for adoption of Office 365 (Exchange and desktop-focused)
  • Desktop-focused insights and opportunities to leverage Microsoft 365 services
  • Total cost of ownership (TCO) for migrating Exchange users to Office 365, including licensing fees
  • Migration plans covering detailed insights and approaches for service migrations such as…
    • Exchange to Exchange Online
    • File servers, personal shares, and Enterprise shares to OneDrive for Business and SharePoint Online
    • Slack / HipChat / Cisco Teams to Microsoft Teams
    • SharePoint Server to SharePoint Online

GET AN ASSESSMENT OF YOUR EXCHANGE 2010 ENVIRONMENT

Wrapping Up

Migrating your email to Office 365 is your best and simplest option to help you retire your Exchange 2010 deployment. With a migration to Office 365, you can make a single hop from old technology to state-of-the-art features.

AIS has the experience and expertise to evaluate and migrate your on-premises Exchange and collaboration environments to the cloud. Let us focus on the business of migrating your on-premises applications to Office 365 so you can focus on the business of running your business. This is the beginning of a journey, but it is one AIS is familiar with and comfortable guiding you through to a seamless and successful cloud migration. If you're interested in learning more about our free Modern Workplace assessment or getting started with your Exchange migration, reach out to AIS today.

NOT SURE WHERE TO START? REACH OUT TO AIS TO START THE CONVERSATION.


If you didn’t catch the first two parts of this series, you can do that here and here.  In this part, we’ll get a little more technical and use Microsoft Flow to do some pretty cool things. 

Remember when we talked about the size and quality of the images we take with our Power App and store as the entity image? When saved as the entity image for a CDS/D365 item, the image loses quality and is no longer good for an advertisement photo. This happens automatically, and as far as I can tell, the high-res image is gone once the conversion takes place (someone please correct me if I'm wrong on that!). On the flip side, it doesn't make a whole lot of sense to put all this tech together only to require my end users to take two pictures of an item, one high-res and one low-res. And we don't want to store high-res images in a relational database for 10,000-plus items because the database could bloat immensely.

Microsoft Flow and SharePoint to the rescue!  

PRO TIP:  Dynamics 365 will crop and resize the image before saving it as the entity image.  All entity images are displayed in a 144 x 144 pixel square.  You can read more about this here.  Make sure to save/retain your original image files.  We’re going to stick ours in a SharePoint Picture Gallery App.

Objective 

Create a Microsoft Flow that handles… 

  • Pulling the original image off the Dynamics record and storing it in SharePoint. 
  • Setting the patch image to the Entity Image for the Dynamics record 
  • Create an advertisement list item for the patch 
  • Save the URLs for the ad and image back to the patch record 

Create the Flow 

We’re going to write this Flow so that it’s triggered by a Note record being created. 

 Flow screenshot with Create from blank highlighted

  • On the next page, click “Search hundreds of connectors and triggers” at the bottom of the page. 
  • Select Dynamics 365 on the All tab for connectors and triggers. 
  • Select the “When a record is created” trigger. 

 Dynamics 365 is highlighted

  • Set the properties for Organization Name and Entity Name.  Entity Name should be “Notes”. 
  • Save the Flow and give it a name. 

Verifying a Few Things 

  • Add a new step and select the Condition item. 
  • The Condition should check to see if the Note has an attachment. We do this using the “Is Document” field.  

 Condition Control is highlighted 

  • In the “Yes” side of the conditional we want to check if the Object Type is a Patch (ogs_patch in this case).  

At this point, if the Flow has made it through both conditionals with a “Yes”, we know we are dealing with a new Note record that has an Attachment and belongs to a Patch record.   

Update the Patch Record 

Now we want to update the patch record's Entity Image field with the attachment. First we need to get a handle on the Patch record. We'll do that by adding an Action to the Yes branch of our new Conditional.

  • Add a Dynamics 365 Update a Record Action.
  • Set the Organization Name, Entity Name, and Record identifier accordingly.  For our Patch Record identifier, we’ll use the Regarding field in the Dynamic content window. 

 

  • Click on Show advanced options and find the Picture of Patch field. 
  • For the Picture of Patch field we need to get the document body of the attachment and convert it from Base64 encoding to binary. We do this using the "Expression" area again. Use the base64ToBinary function to convert the document body like so: base64ToBinary(triggerBody()?['documentbody'])

 

  • Save your work!  I can’t tell you how many times I had to retype that function. 

Create Our SharePoint Items & Clean-up 

Now that we’ve updated our entity image with the uploaded patch picture we want to do a couple of things, but not necessarily in sequence.  This is where we’ll use a parallel branch in our Flow.   

Dealing with a Parallel Branch 

  • Under the last Update a Record action, add a Conditional.  After adding this Conditional hover over the line between the Update action and the new conditional.  You should see a plus sign that you can hover over and select “Add a parallel branch.” 



  • Select this and add a Compose action.  You may need to search for the Compose action. 

 

PRO TIP:  With Modern Sites in SharePoint, we now have three solid options for displaying images in SharePoint.  The Modern Document Library allows viewing as tiles and thumbnails within a document library, the Picture Library which has often been the place to store images prior to the Modern Document Library, and then we can simply just display an image, or images, on a page directly.

Saving the Attachment as an Image in SharePoint

  • Let's deal with the Compose branch first. Our Compose will have the same function as our Picture of Patch did above for the Input field: base64ToBinary(triggerBody()?['documentbody'])
  • After the Compose, we’ll add a Create File Action for SharePoint and use the name from our Patch record as the name for our image in SharePoint.  I’m using a Picture Gallery App in SharePoint and for now, only using the .JPG file type.  The File Content should use the Output from our Compose Action. 

 

Delete the Note

  • Finally, we want to delete that Note from Dynamics (and the Common Data Service) so that the image attachment is no longer taking up space in our Common Data Service.  Add a Dynamics Delete a Record Action after the SharePoint Create file action.  Set the Organization Name, Entity Name, and use the Dynamics content for Note as the Item identifier.

 

Creating Our Advertisement

Let’s jump back to the new Conditional we added after the Update a record Action where we set the entity image. 

  • Set the conditional to check for the Generate Advertisement field being set to true. 
  • If this is true, add a SharePoint Create Item Action and let’s set some values.  What we’re doing here is creating a new SharePoint List Item that will contain some starter HTML for a Patch advertisement. 
  • Save our work! 

 

 

Updating Our Patch Record With Our URLs From SharePoint

  • Under the SharePoint Create Item Action for creating the Ad, AND after the SharePoint Create file action for creating the picture in the Picture Gallery, we’re going to add Dynamics Update record Actions that will be identical with one difference. 
  • The Organization Name, Entity Name, Record Identifier (set to Dynamic Content “Regarding”) should be the same. 
  • On the Ad side, the Update record should set the SharePoint Ad for Patch field to “Link to Item”. 

 

  • On the image side, the Update record should set the SharePoint Image for Patch field to the “Path”. 

 

Seeing It In Action 

Of course, I’ve been saving my work, so let’s go ahead and give this a whirl. 

  • At the top right of your Flow you’ll see a Test button.  We’re going to click that and select “I’ll perform the trigger action.” 
  • To make this more interesting, I’m going to run this from SharePoint! I’ll update a patch and kick off my Flow from the embedded Power Apps Canvas App on my SharePoint home page. 

 

  • I select the patch, then I click the edit button (pencil icon at the top right). 
  • Notice the Attach file link and the Generate Advertisement switch.  We’ll use the first for our image and the second for generating our ad item in SharePoint. 

 

  • Finally, I click the checkmark at the top right to save my changes.  This kicks off our Flow in less than a minute, and when we navigate back over to the Flow we can see that it completed successfully. 

 Verifying the flow

  • I’ll hop back over to SharePoint to make sure that my ad was created and my entity image was set.  I’ll also make sure the high-quality image made it to the SharePoint Picture Library and the Note was deleted from the Patch record in Dynamics.  I also want to make sure the URLs for the ad and image in SharePoint were set back to the Patch record. 

Verifying in SharePoint

One last thing: when we store the image in a SharePoint Picture Gallery App, we can retain the dimensions, size, and quality of the original image, unlike when storing the image as a Dynamics 365 entity image.  Check out the properties in the next screenshot and compare them to the properties on the SharePoint page in the same screenshot.   


Comparing image file sizes

Conclusion 

I hope you are enjoying this series and continue to tune in as the solution for our dad’s beloved patch collection grows.  I constantly see updates and upgrades to the Power Platform, so I know Microsoft is working hard on making it even better. 

A variety of screens displaying Power Platform capabilities
Microsoft recently released a lot of new capabilities in their business applications, including the Microsoft Power Platform, which combines Flow, Power BI, Power Apps, the Common Data Service for apps, and Dynamics 365. To help people gain insights into the power of these applications, the Microsoft Technology Center in Reston, VA offered a Microsoft Business Applications Workshop for Federal Government, which I attended with two AIS colleagues.

As a User Experience (UX) Researcher who joined AIS earlier this year, I am new to Microsoft business applications. In addition, unlike my two colleagues, writing code is not part of my job responsibilities or expertise. However, I found the workshop intriguing and registered for it right away because it was designed to:

  • Help people gain an understanding of the business applications
  • Be “interactive,” with hands-on opportunity for attendees to build a working application
  • Include topics like “solution envisioning and planning” and “no-code business workflow deployment” (Note that the workshop did offer coding exercises for developers on the last day of the workshop, which I did not attend.)

Indeed, attending the workshop allowed me to see the possibilities of these Microsoft applications, which is very relevant to what I do as a UX Researcher. It motivated me to further explore resources on this topic to better meet the needs of our current and future clients.

The User-Centered Design Process

The first project that I worked on after joining AIS was to help a client understand their employees’ needs and collect user requirements for a new intranet to be built on Office 365. In addition, the key stakeholders wanted to:

  • Streamline and automate their business processes, workflows, and document management
  • Drive overall collaboration and communication within the organization

I had extensive experience conducting user research for websites and web applications. To collect employee insights for this new intranet, we followed a user-centered design process:

  1. We started by interviewing stakeholders, content owners, and general employees to understand:
    • Their existing intranet use, areas that worked well, and areas that needed to improve
    • Intranet content that is important to them
    • Existing business processes, workflows, document management, internal collaboration, and communication
  2. Based on the interview findings, we then:
    • Compiled a list of important content pieces that the new intranet should include
    • Set up an online card sorting study for employees to participate in, to inform the information architecture (IA) of the new intranet
    • Documented employees’ needs and expectations in other business areas
  3. Proposed a draft IA for the new intranet based on card sorting findings
  4. Developed a wireframe intranet prototype (using Axure), which reflected the draft IA, contained employee desired content, and mimicked the Office 365 structure and capabilities
  5. Conducted remote usability testing sessions with stakeholders and general employees to evaluate the wireframe prototype
  6. Finalized the intranet prototype and documented UX findings and recommendations to help developers build the new intranet using Office 365 in the next phase

As shown above, we made sure that the Intranet would meet the needs and expectations of the stakeholders and general employees, before it was coded and developed. However, as a UX researcher who does not code, I did not develop our solutions using the Microsoft business applications. I was curious to see how my technical colleagues would apply the capabilities of these applications to improve, streamline, and automate business processes and workflows.

Our user research showed that employees experienced a lot of frustration and pain points during their daily work. For example, both managers and general employees complained that their business processes heavily relied on emails, email attachments, and even hand-written notes, which were easy to miss or misplace and hard to locate. They described how difficult it was for them to keep track of project progress and updates, especially when people from multiple departments were involved. Some of them also mentioned they had to manually enter or re-enter data during a workflow, which was error-prone. All these were real and common business process problems.

The Power of the Power Platform

This workshop provided me with a starting point and a glimpse into the power of the business applications. I’m still learning about their full power, the technical descriptions or details, and the rationale or logic behind each step that we went through when we built the model-driven app during the workshop. However, I was excited to walk away knowing about:

  • The use of a single, connected, and secure application platform to help organizations break down silos and improve their business outcomes
  • The availability of hundreds of out-of-box templates, connectors, and apps, including those that our client can take advantage of and easily customize, such as for onboarding tasks, leave requests, expense reimbursements, and shout-outs to co-workers
  • Building solutions and applications quickly and easily with a simple drag-and-drop user interface, without the need to write a single line of code
  • Greater efficiency for business people and non-developers, who can achieve what they want to do independently, rely less on IT support or developers, reduce overall cost, and save time

After the workshop, I found a wealth of online resources and videos on Microsoft Business Applications. Below are some Microsoft webpages that describe content or steps similar to those we went through during the workshop:

I look forward to more in-depth learning about this topic to better understand the power of Microsoft business applications. With this knowledge and together with my colleagues, we will propose and build the best business solutions based on user research, helping our clients achieve desired outcomes by improving their employee experience.