This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins with some background on how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we walk through a step-by-step example of using Excel to solve the problem.

Background

As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal against repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen the solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques that I have used to generate C#, XML, JSON, and more. I will use data loading in Microsoft Power Apps and Dynamics as a real-world example; however, we will need to start with the basics.

The Basics

Before going into the data-loading example, I want to start with a very simple one. Keep in mind that there may be more effective ways to solve this particular problem without Excel; I am using it purely to illustrate the technique. Let’s say you have a data model and a contact model that are, for the most part, the same except for a few property names, and you need to write methods to map one to the other. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let’s say you have a hundred of these to do, and each model may have 50+ properties! This would very quickly turn into a time-consuming and mundane task; not to mention you would likely make a typo along the way that another developer would be sure to point out in the next code review. Let’s see how Excel can help in this situation.

In this scenario, the first thing you will need is the raw data for the contact and data models. One way to get it is from the property declarations themselves. Consider the classes below:
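The original post showed the two classes as an image. A sketch consistent with the mapping code above might look like the following; the property names come from that example, everything else is an assumption. Note that the exact column that ends up holding the property name after Text to Columns (column D in the steps below) depends on how your declarations tokenize (modifiers, indentation, and so on).

public class Contact
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PhoneNumber { get; set; }
    public string CellPhone { get; set; }
}

public class ContactDataModel
{
    public string firstName { get; set; }
    public string lastName { get; set; }
    public string phoneNumber { get; set; }
    public string mobileNumber { get; set; }
}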

Use Properties to Identify Classes

  1. Create three Excel worksheets called Primary, Secondary, and Generator.
  2. Copy/paste the property statements from Contact into the Primary worksheet and the property statements from ContactDataModel into the Secondary worksheet.
  3. Select Column A in the Primary worksheet.
  4. In Excel, select the Data tab and then Text to Columns.
  5. Choose Delimited, then Next.
  6. Uncheck all boxes, check the Space checkbox, then click Finish.
  7. Your worksheet should look like the following:
    (Screenshot: sample of the worksheet after Text to Columns)
  8. Repeat steps 3-7 with the Secondary worksheet.
  9. In the Generator worksheet, select cell A1 and then press the = key.
  10. Select the Primary worksheet and then cell D1.
  11. Press the Enter key; you should return to the Generator worksheet and the text “FirstName” should be in cell A1.
  12. Select cell B1 and then press the = key.
  13. Select the Secondary worksheet and then cell D1.
  14. Press the Enter key; you should return to the Generator worksheet and the text “firstName” should be in cell B1.
  15. Select A1:B1, click the little square in the lower-right corner of your selection, and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that dragging the cells down incremented the row numbers in the formulas.
    (Screenshot: incremented rows in the formula)
    Press CTRL+~ to switch the display back from formulas to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this: the IF checks whether A1 has a value of 0 (an empty reference) and returns an empty string if so; otherwise, it builds the concatenated string.
  17. As in the earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    (Screenshot: worksheet after dragging the formula down)
  18. You can now copy/paste the values in column C into the code (a variation that generates the full assignment statements is sketched just after this list):
    (Screenshot: values from column C pasted into the code)
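If you want column C to produce lines that can be pasted straight into a mapping method, the formula from step 16 can be extended with the variable prefixes. This is just a sketch; the contact and datamodel prefixes are assumptions based on the example at the top of this post, and you would substitute whatever variable names your method actually uses:

    =IF(A1=0,"","contact." & A1 & " = datamodel." & B1 & ";")

Dragged down column C, this produces output such as contact.FirstName = datamodel.firstName; for each row, matching the hand-written mapping shown earlier.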

As you continue on, Excel keeps track of the most recent Text to Columns settings used, so if you paste another set of property statements into the Primary and Secondary worksheets, you should be able to skip steps 1-5 for the remaining classes. In the sample class file and workbook, I have included Address models as an illustration.

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to weigh the time investment against the payoff of code generation and use it where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps, which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for how to anonymize or customize the data that is being sent to Application Insights. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series.

Two things to consider with telemetry:

  1. Adding additional telemetry that is not recorded by Application Insights.
  2. Removing or anonymizing data that is already being sent to Application Insights.

There will be a later post in this series discussing how to add new telemetry; this post focuses on anonymizing or removing data.

Personally Identifiable Information (PII) Already in Application Insights Account

We’ll start with a likely scenario: during an audit or during testing, you discover that you are logging PII to your Application Insights account. How can you go about fixing that? The short answer is to delete the entire Application Insights resource. That means you’ll lose access to all the historical telemetry that was in the account, and your production system will no longer be logging telemetry anywhere unless you create a new account and update your production system with the new telemetry key. However, this does solve your immediate problem of retaining PII. See the Microsoft documentation for details on what is captured by Application Insights and how it’s transmitted and stored.

Application Insights does provide a PURGE endpoint, but requests are not timely, the endpoint isn’t well documented, and it will not properly update metrics to account for the data that was purged. In general, if you have a compliance concern, delete the Application Insights account. Remember, Application Insights is designed to be a highly available, high-performance telemetry platform, which means it is designed as an append-only system. The best solution is simply not to send data to the platform that you shouldn’t.

API Use Case

Think of an API your business may have built. This API allows us to search for customers by e-mail to find their customer id. Once we have the customer id, we can make updates to their record, such as their first name, birthday, etc. By default, Application Insights records the request URL and the response code; it does NOT record any HTTP headers, the request body, or the response body. First, let’s think about how we might design the search endpoint. We have two approaches:

  1. GET /api/customer/search?emailAddress=test@appliedis.com
  2. POST /api/customer/search
    a. In the request body, send JSON:
    { "emailAddress": "test@appliedis.com" }

If we design the API using approach #1, we will be sending PII to Application Insights by default. However, if we design the API using approach #2, no PII is sent to Application Insights by default. Always try to keep your URLs free of any PII.
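As a rough sketch of approach #2 (the controller, route, and model names here are illustrative assumptions, not taken from the original post), the search endpoint might look something like this in ASP.NET Core:

using Microsoft.AspNetCore.Mvc;

public class CustomerSearchRequest
{
    public string EmailAddress { get; set; }
}

[ApiController]
[Route("api/customer")]
public class CustomerController : ControllerBase
{
    // POST /api/customer/search
    // The e-mail address travels in the request body, so it never appears in the
    // request URL that Application Insights records by default.
    [HttpPost("search")]
    public IActionResult Search([FromBody] CustomerSearchRequest request)
    {
        // Look up the customer id by e-mail address (implementation omitted).
        return Ok();
    }
}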

That may not always be possible, so let’s look at another use case. Let’s say we have the primary key of a customer record and we want to view and make edits to that record. What would the endpoints look like?

  1. GET /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6
  2. PUT /api/customer/9b02dd9d-0afd-4d06-aaf1-c34d3c051ec6

Now, depending on your regulatory environment, logging these URLs to Application Insights might present a problem. Notice that while we are not logging e-mail addresses, phone numbers, or names, we are still logging behavior about an individual: when they accessed the site and when their profile was updated. To avoid this, we would like to anonymize the URL data that is being sent to Application Insights.

Anonymize Data Sent to Application Insights

This section assumes you are using ASP.NET Core and have already configured Application Insights; see my previous blog post for details. Also, if you need to troubleshoot your configuration or verify it’s working as expected, please see my other blog post in this series.

The Application Insights NuGet package provides an interface for exactly this purpose called ITelemetryProcessor. You simply subclass it and implement the Process method. Telemetry processor implementations act much like ASP.NET Core middleware, in that there is a chain of telemetry processors: your implementation must provide a constructor that accepts the next ITelemetryProcessor in the chain, and in your Process method you are responsible for calling on to that next processor. The last processor in the chain is the one provided by the NuGet package; it implements the same interface and sends the telemetry over the wire to the actual Application Insights service in Azure. The Process method receives a single argument, an ITelemetry, which you can cast to one of its subclasses, e.g. DependencyTelemetry, RequestTelemetry, etc. In the Process method, you can then mutate the telemetry in whatever way you need, e.g. to anonymize data, before calling the Process method on the next telemetry processor in the chain (the one that was provided to your constructor). If you want a given telemetry item to never be sent to Application Insights, simply omit the call to the next processor’s Process method.

Now we will look at the source code for a telemetry processor that does what we are proposing: it looks for anything that resembles a customer id in the URL of a RequestTelemetry item and replaces it with the word “customerid”.
(Screenshot: source code of the RequestTelemetry processor)
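The original post showed the processor as a screenshot. A sketch of what such a processor might look like follows; the class name and the exact regular expression are assumptions, chosen here to match the GUID-style customer ids used in the URLs above:

using System;
using System.Text.RegularExpressions;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class CustomerIdAnonymizingTelemetryProcessor : ITelemetryProcessor
{
    // Assumes customer ids are GUIDs, as in the example URLs; adjust the pattern to your id format.
    private static readonly Regex CustomerIdPattern = new Regex(
        "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
        RegexOptions.Compiled);

    private readonly ITelemetryProcessor _next;

    // The next telemetry processor in the chain is supplied when the processor is registered.
    public CustomerIdAnonymizingTelemetryProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Only request telemetry is of interest; traces, dependencies, etc. pass through untouched.
        if (item is RequestTelemetry request)
        {
            if (request.Name != null)
            {
                request.Name = CustomerIdPattern.Replace(request.Name, "customerid");
            }

            if (request.Url != null)
            {
                request.Url = new Uri(CustomerIdPattern.Replace(request.Url.ToString(), "customerid"));
            }
        }

        // Always call the next processor so the (now anonymized) telemetry is still sent.
        _next.Process(item);
    }
}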

As seen above, in the constructor we receive the next telemetry processor in the chain. In the Process method, we check whether we have a RequestTelemetry, ignoring all other telemetry types such as TraceTelemetry or DependencyTelemetry. RequestTelemetry has .Name and .Url properties, both of which might contain details of the URL that holds our PII. We use a regular expression to see if either contains a customer id and, if so, replace it with the word “customerid”. Then we always call the next telemetry processor in the chain so that the modified telemetry is still sent to Application Insights.
Remember, just having this ITelemetryProcessor coded won’t do anything; we still need to register it with the Application Insights library. In Startup.cs, you will need to add a line to the ConfigureServices method, see below:

(Screenshot: adding a line to the ConfigureServices method)
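Assuming the processor sketched above, the registration might look like the following; AddApplicationInsightsTelemetryProcessor is the extension method provided by the Microsoft.ApplicationInsights.AspNetCore package:

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Register the custom telemetry processor so it participates in the processing chain.
    services.AddApplicationInsightsTelemetryProcessor<CustomerIdAnonymizingTelemetryProcessor>();
}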

Now that you’ve registered the processor, it’s just a matter of deploying the code. If you wanted to debug this locally while still sending telemetry live to Application Insights in Azure, please see my blog post with details on how to do that.

Summary

You learned about the ITelemetryProcessor extension point in the Application Insights NuGet package and how to use that extension point to prevent PII from being logged to your Application Insights account. We’ve discussed how to design your API endpoints so that, by default, you hopefully don’t need to customize your telemetry configuration at all. Lastly, I have shown you how to deal with PII that may have been logged accidentally to your production Application Insights account. You may also want to take advantage of a relatively new feature of Application Insights that lets you set the retention period of logs. Previously it was fixed at 90 days, but now you can configure it to as low as 30 days. This may be useful for handling right-to-be-forgotten requests if your logging system is storing PII and you need to ensure it is removed within 30 days of the request. Next in this blog series, we will discuss logging additional telemetry to create an overall better picture of your system’s health and performance.

Once you’ve decided to instrument your ASP.NET Core application with Application Insights, you may be looking for a quick way to troubleshoot telemetry configuration. For details on why you should be using Application Insights and how to get started, please reference my previous post in this blog series. How do you go about testing your telemetry configuration? Typically, developers adjust the application and then deploy it to Azure, ideally a development environment in Azure. However, making a change to the application, building, publishing to Azure, testing the given endpoints, and waiting for telemetry to appear in Application Insights can require upwards of 30 minutes per change, assuming that you know what you are doing and don’t make any mistakes. Is there a way to work locally?

Troubleshooting Locally

Let me start by discussing the end goal: we want to simulate our production environment while running locally on our dev machine. Azure typically provides local emulators for services like storage and Cosmos DB, but it does not provide an Application Insights emulator. So although we want to simulate production while running locally, we will need to send data to an actual Azure Application Insights account. To simulate running in production, we should publish our .NET application in Release mode and start it outside of Visual Studio. One reason for starting the application outside Visual Studio is that our production environment will not have Visual Studio installed. Another reason is that Visual Studio includes a Diagnostics panel that captures the Application Insights telemetry and prevents it from being sent to the Azure Application Insights account. I’d like to emphasize that the Diagnostics panel built into Visual Studio is not an emulator and shouldn’t be used for that purpose.

First, we must publish the application in Release mode. You can do that using the dotnet command-line as shown below.

(Screenshot: publishing in Release mode from the command line)
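The screenshot in the original post shows the command; in its simplest form, run from the project directory, it is something like:

dotnet publish -c Release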

This will publish to a directory similar to the one below:

(Screenshot: publish output directory, e.g. under bin\Release\netcoreapp3.0\)

Once in the directory containing the build artifacts, we should find both appsettings.json and the .dll for our main application (CustomerApi.dll in my case). From the command line, we can then run Kestrel directly using the following command.

(Screenshot: running Kestrel directly from the command line)
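Using the application name from this post, the command is simply:

dotnet CustomerApi.dll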

If using the defaults, your application will now be running and available in a browser at either http://localhost:5000/ or https://localhost:5001/. We are likely still missing one step: configuring the telemetry key for Application Insights. In the bin\Release\netcoreapp3.0\ folder, locate appsettings.json, open the file, and put the telemetry key in it.
(Screenshot: configuring the telemetry key in appsettings.json)
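The exact shape of the settings file depends on your application, but with the ASP.NET Core Application Insights SDK a common convention is an ApplicationInsights section containing the instrumentation key, roughly like this (the key value below is a placeholder):

{
  "ApplicationInsights": {
    "InstrumentationKey": "00000000-0000-0000-0000-000000000000"
  }
}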

If you go back to the command-line you can press Ctrl+C to exit the running web application and then re-run the dotnet CustomerApi.dll command to restart the application. We now have an application running locally that is sending telemetry to Application Insights in Azure.

View Live Telemetry from Local Machine

In the Azure portal, open the Application Insights resource and then locate the “Live Metrics Stream” blade.
(Screenshot: Live Metrics Stream blade in the Azure portal)

The Live Metrics panel should open and connect as long as the application is running locally using “dotnet CustomerApi.dll”. Once open, scroll to the bottom of the pane.

(Screenshot: bottom of the Live Metrics pane)

At the bottom, you will see a list of connected servers. In my example below, you see two servers. The one highlighted in red is my local developer machine. The other server is the Azure Application Service that I have running in my development environment in Azure.

(Screenshot: connected servers, showing the local developer machine and the Azure App Service)

To quickly recap: we have our application running locally, outside Visual Studio, from the command line, and in Azure Application Insights we can see that our local machine is connected to Live Metrics. In order to actually see telemetry flow into this panel, you will likely want to make one other change. In the upper-right, click the filter icon to adjust the live metrics filters.

(Screenshot: telemetry sample with the filter icon in the upper-right)

You will then be prompted with the following dialog. If you trust the servers, you can safely ignore the warning.

(Screenshot: authorize connected servers dialog)

You will then see a dialog with the current filters. Notice that the default configuration will only show failed requests and failed dependency calls. Since we are troubleshooting, it’s likely you will want to see all requests. Feel free to click the “x” to remove both filters and then click the “Save” button.

(Screenshot: current filters dialog)

Once you have completed this step, you can go back to your web browser on your local machine (either http://localhost:5000/ or https://localhost:5001/) and make a request to your API. I tried a URL that I know returns a 404 response. You can see the live telemetry that showed up for me:

(Screenshot: sample telemetry appearing in Live Metrics)

Then, click on that row for more details about the given telemetry item. This telemetry is also being logged to Application Insights, and you will be able to see it on all the usual dashboards and search for it using a Log Analytics query; just be aware there is still the typical 2 to 5-minute delay between when it is sent and when it appears in queries and dashboards.

Summary

You have now learned how to troubleshoot Azure Application Insights quickly and without needing to deploy your application to Azure. To summarize, you run “dotnet publish” in “Release” mode locally and then run the application from the command-line outside Visual Studio. This is done for a few reasons:

  • When publishing in release mode, you do not need to worry about appsettings.development.json
  • By running outside Visual Studio, you do not need to worry about launchSettings.json setting any special environment variables that don’t match your production environment (e.g. ASPNETCORE_ENVIRONMENT)
  • When running outside Visual Studio, you do not need to worry about the diagnostics panel deciding to capture your telemetry and preventing it from being sent to Azure.

Once your application is running locally and has the Application Insights telemetry key configured properly, you will find the telemetry in the “Live Metrics” view, avoiding the typical 2 to 5-minute delay before it appears elsewhere in Application Insights.

If you are concerned that this setup will not allow for the use of the Visual Studio Editor, think again! Once you have the application running outside Visual Studio, simply use the “Attach To Process…” menu item in Visual Studio. This gives you the best of both worlds:

(Screenshot: Visual Studio debugger attached to the locally running process)

Hopefully, this post helped you understand how to troubleshoot your Application Insights telemetry configuration more quickly. That will come in handy in the next post in this series, where we talk about customizing telemetry to keep PII (Personally Identifiable Information) out of your Application Insights logs.

Implementing a cloud strategy and modernizing legacy systems are two of the core objectives of most IT organizations. Purchasing a COTS product or SaaS offering can speed up modernization and comes with a lot of advantages. COTS products have a proven track record and address specific business needs that would be difficult to justify building on your own. A COTS product shifts the burden of creating features from your organization to the vendor. Finally, COTS products promise a shorter timeframe to implementation. Even though you’ll be purchasing a solution to 80% of your problem with the COTS product, the hardest parts of implementing it are still your responsibility. Below are some key areas you will still own.

Security and Infrastructure

Security and infrastructure are your responsibility. An off-the-shelf product or Software as a Service (SaaS) product won’t address this for you. If this is a SaaS product, how will your hybrid network access it, and how is that access governed? You’ll need to do a risk assessment of the SaaS product, which includes how you connect to it, how it stores its data, and even how the vendor creates the software. If this is an off-the-shelf product, how will it be installed in the cloud? Ask whether it is cloud-native or whether it needs to run on virtual machines in the cloud. If it’s running on virtual machines, are those hardened, and who has access to them? Cloud virtual machines complicate networking, since they need to be accessed from on-prem and may still need to reach into the cloud or out to the internet. That can leave a wide attack surface you’ll need to account for. Security and infrastructure are the biggest concerns, and you’ll need to own them.

Automation

One of the promises of moving to the cloud is gaining business agility, and automation is a key component of reaching that goal. A SaaS product removes the burden of deploying and maintaining the application, but there may still be aspects you need to automate. For example, if the SaaS product must be configured, it might have a UI and an API for doing so. It’s in your best interest to bring the SaaS product into your normal development pipeline and consider infrastructure as code as it applies to it. If you purchase a COTS product, be sure you can stand up an entire environment, including installation of the product, with the click of a button. There is no excuse for not automating everything, and there are plenty of tools in Azure DevOps pipelines for integration and automation.

Integration

A COTS product provides 80% of the functionality needed, but what about the other 20%? The remaining functionality the product doesn’t provide is likely what differentiates your company from the others. There are always edge cases and custom functionality that you’ll want and that the vendor either does not provide or would be expensive to have the vendor implement. You’ll need to bring that work in-house or hire a systems integrator to hook things together. Whether it’s a SaaS product or a COTS product, integration with the applications that provide the remainder of your functionality is your responsibility, as is understanding how integration will work before purchasing the COTS product. A good COTS product will have a solid HTTP REST API. If the product you’re purchasing doesn’t have a simple API, consider a wrapper to make the integration easier; API Management is an Azure service that can do that translation for you. You might find that you want to adapt the COTS API into one that makes sense to the rest of your systems. Large COTS products should also support messaging of some kind, which helps build loosely coupled components with high cohesion. COTS products might also offer file-based and database integration, but these should be avoided. Integration of the COTS product is your responsibility, and the effort can equal the implementation effort of the COTS product itself.

Conclusion

COTS products can provide great benefits to your company and bring new functionality online quickly. Understand that your IT department will still have to drive the overall architecture and that you are responsible for everything around the COTS product. The bulk of this work will fall under security, infrastructure, automation, and integration. Focus on these concerns and you’ll have a successful implementation.


Continuous Integration (CI) and Continuous Delivery (CD) are rapidly becoming an integral part of software development. These disciplines can play a significant role in building stable release processes that help ensure project milestones are met. And in addition to simply performing compilation tasks, CI systems can be extended to execute unit testing, functional testing, UI testing, and many other tasks. This walkthrough demonstrates the creation of a simple CI/CD deployment pipeline with an integrated unit test.

There are many ways of implementing CI/CD, but for this blog, I will use Jenkins and GitHub to deploy the simple CI/CD pipeline. A Docker container will be used to host the application. The GitHub repository hosts the application, including a Dockerfile for creating an application node. Jenkins is configured with the GitHub and Docker plugins.

Inflexible customer solutions and business unit silos are the bane of any organization’s existence. So how does a large, multi-billion dollar insurance organization, with numerous lines of business, create a customer-centric business model while implementing configurable, agile systems for faster business transactions?

The solution is not so simple, but with our assistance, we’ve managed to point our large insurance client in the right direction. What began as a plan to develop a 360-degree customer profile and connect the disparate information silos between business units ultimately became the first step towards a more customer-centric organization.

A major multi-year initiative to modernize the organization’s mainframe systems onto the Microsoft technology platform will now provide significant cost savings over current systems and enable years of future business innovation.

In the world of SharePoint upgrades and migrations, a number of terms are thrown around and often used interchangeably. This post outlines several key terms that will be surfaced throughout a three-part series on upgrade/migration strategies for SharePoint 2013. If you would like to jump to another post, use the links below:

  • Part 1 – Definitions (this post)
  • Part 2 – Considerations Outside of SharePoint (Coming soon)
  • Part 3 – Diving into Database Attach (Coming soon)

In past revisions of SharePoint, we had multiple ways to upgrade our farms (and the content within them) to the latest version using the tooling Microsoft provides. Over the years, Microsoft used a number of terms related to the types of upgrade available:

  • In-place upgrade – Often considered the easiest approach, but the most risky. The setup of the new system is performed on existing hardware and servers.
  • Gradual upgrade – Allows for a side-by-side installation of the old and new versions of SharePoint.
  • Database attach/migration – Allows for the installation and configuration of an entirely new environment where content is first migrated, and then upgraded to the desired state.

As SharePoint matured, the number of available upgrade options dwindled. For instance, in an upgrade from SharePoint Portal Server 2003 to Office SharePoint Server 2007, we could follow any one of the three upgrade paths noted above to reach our desired end state. In an upgrade of Office SharePoint Server 2007 to SharePoint Server 2010 we still had two paths available: the in-place upgrade and the database attach approach. For SharePoint 2013, we’re left with just the database attach approach.

Before we dive further into the database attach upgrade scenario, it’s helpful to take a step back and establish a common language as we discuss the upgrade process.

Despite the terms Dependency Inversion Principle (DIP), Inversion of Control (IoC), Dependency Injection (DI), and Service Locator existing for many years now, developers are often still confused about their meaning and use. When discussing software decoupling, many developers view the terms as interchangeable or synonymous, which is hardly the case. Aside from misunderstandings when discussing software decoupling with colleagues, this misinterpretation of terms can lead to confusion when tasked with designing and implementing a decoupled software solution. Hence, it’s important to differentiate between what is a principle, a pattern, and a technique.

In this blog I’ll discuss some post-release reporting issues that we faced on one of our projects and the solutions we implemented. On the technology side, we had SQL Server 2008 R2 and an MVC 4.0 application (hosted in Amazon Web Services) in our production environment.

The Problem

After the production release, the reporting system was not performing as per user expectations. For most of the high-volume reports (50K to 200K rows in the report output), we were getting request timeout errors. The client SLA for response time was two minutes; hence, any report (big or small) had to return data within two minutes. All the reports were designed using SQL Server Reporting Services 2008 R2. In all, there were close to 40 reports with such timeout issues.

System Center 2012 is all about cloud computing — it provides IT as a Service, so it offers support for heterogeneous environments extending from a private cloud to the public cloud.

Trying to describe what you can accomplish with Microsoft System Center 2012 is akin to defining what a carpenter can build when he opens his toolbox. The possibilities are virtually limitless. When all of the System Center 2012 management components are deployed, administrators and decision makers have access to a truly integrated lifecycle management platform. The seven core applications can each be deployed independently to provide specific capabilities to an organization, but are also tightly integrated with each other to form a comprehensive set of tools.

System Center 2012 is offered in two editions, Standard and Datacenter, with virtualization rights being the only difference. The simplified licensing structure is identical to that of Windows Server 2012. Further simplifying the licensing, SQL Server Standard is included and no longer needs to be licensed separately. The applications that make up the System Center 2012 suite, however, cannot be licensed individually, so it makes sense to have an idea of what each application can do, and how it fits into your environment.