This is the second of two blog posts that will share solutions to common Azure DevOps Services concerns:

Storing Service Connection Credentials

To deploy Azure infrastructure from an Azure DevOps pipeline, your pipeline agent needs permission to create resources in your subscription. These permissions are granted in Azure DevOps with a Service Connection. You have two options when creating this service connection:

  1. Service Principal Authentication
  2. Managed Identity Authentication

If you choose to use a Service Principal, you must store a secret or certificate for an Azure Active Directory (AAD) application in Azure DevOps. Since this secret is equivalent to a service account password, some customers may be uncomfortable storing it in Azure DevOps. With the alternative option, Managed Identity Authentication, you run your pipeline agent on a machine with a system-assigned managed identity and give that identity the necessary permissions to your Azure resources. In this configuration, the certificate needed to authenticate is stored on the pipeline agent within your private network.
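To illustrate why no secret needs to be stored in Azure DevOps, the sketch below (using the Azure.Identity package) shows roughly how any code running on an Azure VM with a system-assigned managed identity acquires a token; the service connection performs a similar exchange on your behalf, and the scope shown is only an illustrative example.

using Azure.Core;
using Azure.Identity;

class ManagedIdentityTokenSample
{
    static void Main()
    {
        // No secret or certificate appears here; the Azure platform supplies the
        // credential to the VM that hosts the agent.
        var credential = new ManagedIdentityCredential();

        // Request a token scoped to Azure Resource Manager (illustrative scope).
        AccessToken token = credential.GetToken(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        System.Console.WriteLine($"Token acquired, expires {token.ExpiresOn}");
    }
}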

Storing Sensitive Information in the Backlog

It is important for the User Stories in the backlog to be well defined so that everyone involved in the development process understands how to define “done”. In some environments, the requirements themselves could contain information that is not appropriate to store in Azure DevOps.

You will need to ensure your team has a strong understanding of what information should not be captured in the backlog. For example, your organization may not allow the team to describe system security vulnerabilities, but the team needs to know what the vulnerability is to fix it. In this case, the team may decide to define a process in which the backlog contains placeholder work items that point to a location that can safely store the details of the work that are considered too sensitive to store within the backlog itself. To reinforce this process, the team could create a custom work item type with a limited set of fields.

New Security Vulnerability

This is an example of how a tool alone, such as Azure DevOps, is not enough to fully adopt DevOps. You must also address the process and the people.

Storing Sensitive Information in Code

Depending on your organization and the nature of your project, the content of the source code could also become an area of concern. Most developers are aware of the techniques used to securely reference sensitive values at build or runtime so that these secrets are not visible in the source code.

A common example is keeping a reference to an Azure Key Vault secret in your code and pulling the secret down at runtime. Key Vault makes it easy to keep secrets out of your code in both application development and infrastructure-as-code development, whether by using secret references in Azure Resource Manager (ARM) templates or by retrieving secrets at compile time in your Desired State Configurations (DSC).
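As a minimal sketch of the runtime-retrieval side, the C# below uses the Azure SDK's SecretClient (Azure.Security.KeyVault.Secrets plus Azure.Identity) to pull a secret by name; the vault URI and secret name are hypothetical placeholders.

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultSample
{
    static void Main()
    {
        // Only a reference (vault URI + secret name) lives in source control,
        // never the secret value itself.
        var client = new SecretClient(
            new Uri("https://contoso-vault.vault.azure.net/"),   // hypothetical vault
            new DefaultAzureCredential());

        // Pull the secret down at runtime.
        KeyVaultSecret secret = client.GetSecret("Database-ConnectionString").Value;
        string connectionString = secret.Value;

        Console.WriteLine($"Retrieved secret '{secret.Name}'");
    }
}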

Features in some code scanning tools, such as Security Hotspot detection in SonarQube, can detect sensitive values before they are accidentally checked in, but, as with backlog management, do not rely entirely on tools to get this done. Train your team to think about security throughout the entire development lifecycle, and develop a process that can detect and address mistakes early.

Storing Deployment Logs

Even if you use a private pipeline agent running on your internal network, deployment logs are sent back to Azure DevOps. You must be aware of any logs that contain information your organization's policies do not allow to be stored in Azure DevOps. It is difficult to be certain of the contents of these logs because they are not typically included in any type of quality assurance or security testing, and because they constantly evolve as your application changes.

The solution is to filter the logs before they leave your internal network. While the pipeline agent has no configuration option to do this, you can design your pipeline to execute deployment jobs within a process you control. David Laughlin describes two ways to achieve this in his post Controlling Sensitive Output from Azure Pipelines Deployment Scripts.
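As a simple illustration of the filtering idea (and not one of the approaches described in that post), a deployment job could pipe its output through a small redaction step inside your network so that only masked text is streamed back to Azure DevOps. The patterns below are hypothetical:

using System;
using System.Text.RegularExpressions;

class LogRedactor
{
    // Hypothetical patterns; a real filter would target whatever your policies
    // consider sensitive (connection strings, tokens, internal host names, etc.).
    static readonly Regex Sensitive = new Regex(
        @"(password|connectionstring|token)\s*[:=]\s*\S+",
        RegexOptions.IgnoreCase);

    static void Main()
    {
        string line;
        // Read the deployment script's output from stdin and re-emit it with
        // sensitive values masked before it ever reaches the pipeline log.
        while ((line = Console.ReadLine()) != null)
        {
            Console.WriteLine(Sensitive.Replace(line, "[REDACTED]"));
        }
    }
}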

Conclusion

The Azure DevOps product is becoming increasingly valuable to government development teams as they adopt DevSecOps practices. You have complete control over Azure DevOps Server, at the cost of additional administrative overhead. With Azure DevOps Services, Microsoft takes much of the administrative burden off your team, but you must consider your organization's compliance requirements when deciding how the tool will be used to power your development teams. I hope the solutions presented in this series help your team decide which to choose.

Relational database source control, versioning, and deployments have been notoriously challenging. Each instance of the database (Dev, Test, Production) can contain different data, may be upgraded at different times, and is generally not in a consistent state. This is known as database drift.

Traditional Approach and Challenges

Traditionally, to move changes between each instance, a one-off "state-based" comparison is done either directly between the databases or against a common state such as a SQL Server Data Tools project. This yields a script that has no direct context for the changes being deployed and requires tremendous effort to review to ensure that only the intent of the changes being promoted or rolled back is included. This challenge sometimes leads to practices like backing up a "known good copy," a.k.a. production, and restoring it to the lower tiers. For all but the smallest applications and teams, this raises even more challenges, such as data governance and the logistics around test data. These patterns can be automated, but they generally do not embrace the spirit of continuous integration and DevOps.

State Based

For example, the three changes above could be adding a column, then adding data to a temporary table, and finally populating the new column with the data from the temporary table. In this scenario, it isn't only important that a new column was added; how the data was added matters too. The context of the change is lost, and trying to derive it from the final state of the database is too late in the process.

DevOps Approach

Architecturally, application persistence (a database) is an aspect or detail of an application, so we should treat it as part of the application. We use continuous integration builds to compile source code into artifacts and promote them through environments. Object-Relational Mapping (ORM) frameworks like Entity Framework and Ruby on Rails have, out of necessity, paved the way with a change-based "migrations" approach. The same concept can be used for just the schema with projects like FluentMigrator. At development time, the schema changes to upgrade and roll back are expressed in the framework or in scripted DDL and captured in source control. They are compiled and included in the deployment artifact. When the application invokes a target database, it identifies the current version and applies any changes up or down sequentially to provide deterministic version compatibility. The application is in control of the persistence layer, not the other way around. It also forces developers to work through the logistics (operations) of applying the change. This is the true essence of DevOps.
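For example, with FluentMigrator the application (or a small console host shipped in the artifact) can apply any pending migrations against the target database at deployment time. The sketch below assumes the FluentMigrator.Runner package, SQL Server, and a connection string supplied by the pipeline:

using FluentMigrator.Runner;
using Microsoft.Extensions.DependencyInjection;

class MigrationHost
{
    static void Main(string[] args)
    {
        // Connection string is supplied at deployment time (e.g., a pipeline variable).
        string connectionString = args[0];

        using var serviceProvider = new ServiceCollection()
            .AddFluentMigratorCore()
            .ConfigureRunner(rb => rb
                .AddSqlServer()
                .WithGlobalConnectionString(connectionString)
                // Scan this assembly for [Migration]-decorated classes.
                .ScanIn(typeof(MigrationHost).Assembly).For.Migrations())
            .AddLogging(lb => lb.AddFluentMigratorConsole())
            .BuildServiceProvider(false);

        using var scope = serviceProvider.CreateScope();
        var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();

        // Compare the target database's version table with the migrations in the
        // artifact and apply anything missing, in order.
        runner.MigrateUp();
    }
}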

Migration Scripts

In the same example as above, the three changes would be applied to each database in the same sequence, and the intent of each change would be captured along with it.
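A minimal sketch of such a migration with FluentMigrator might look like the following; the table, column, and SQL are hypothetical stand-ins for the three changes described above.

using FluentMigrator;

// The version number gives every environment a deterministic, ordered history.
[Migration(20240101120000)]
public class AddPreferredNameToCustomer : Migration
{
    public override void Up()
    {
        // 1. Add the new column.
        Alter.Table("Customer").AddColumn("PreferredName").AsString(100).Nullable();

        // 2. Stage data in a temporary table (hypothetical source).
        Execute.Sql("SELECT CustomerId, FirstName AS PreferredName INTO #Staged FROM Customer");

        // 3. Populate the new column from the staged data.
        Execute.Sql(@"UPDATE c SET c.PreferredName = s.PreferredName
                      FROM Customer c JOIN #Staged s ON c.CustomerId = s.CustomerId");
    }

    public override void Down()
    {
        // The rollback captures intent as well: remove the column that Up added.
        Delete.Column("PreferredName").FromTable("Customer");
    }
}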

Summary

In summary, a migrations-based approach lends itself to a DevOps culture. It may take some additional effort up front to work through and capture how changes should be applied, but it allows all aspects of the database deployment process to be tested throughout a project lifecycle. This promotes repeatability and ultimately the confidence needed to perform frequent releases.

In my previous post, I discussed lessons learned about migrating SQL Server databases from on-premises to Azure SQL Databases. This post shares several of my issues and solutions around automating Azure resource provisioning and application code deployment. Before this project, I had never used Azure DevOps or Team Foundation Server for anything except a code repository or task tracking. Most of these applications would use Build and Release pipelines to configure the Azure subscription and deploy the applications. This was also my first foray into Azure Resource Manager (ARM) templates.

Azure Resource Manager (ARM) Templates

ARM templates provide a declarative method for provisioning and configuring Azure resources. These JSON files use a complex schema that provides deep configuration options for most of the resources available in Azure. Some of the key benefits of ARM templates are:

  • Idempotent: templates can be redeployed many times returning the same result. If the resource exists, changes will be applied, but the resource will not be removed and recreated.
  • Intelligent processing: Resource Manager can manage the order of resource deployment, provisioning resources based on dependencies. It will also process resources in parallel whenever possible, improving deployment times.
  • Modular templates: JSON can get incredibly hard to maintain when files are large. ARM provides Linked Templates to split resources into separate files to simplify maintenance and readability. This also provides the opportunity to reuse some templates in multiple deployment scenarios.
  • Exportable: Resources have an option to export the template in the Azure portal. This is available at the validation stage when creating a new resource, or from the resource’s management pane. A Resource Group also provides a method for exporting the template for multiple resources. This was very useful for understanding more advanced configurations.

For the first few projects, I built large templates that deployed several resources. This presented several hurdles to overcome. First, large templates are hard to troubleshoot. Unlike scripting solutions, there is no debugger that allows you to step through the deployment. The portal does provide logging, but some of the error descriptions can be vague or misleading, especially in more advanced configuration settings. In situations where there are multiple instances of one type of resource, there may be no identifying information for which instance caused the error. In theory, linked templates would be a way to handle this, but linked templates require a publicly accessible URI, and the client’s security rules did not allow that level of exposure. The best solution was to add resources one at a time, testing until successful before adding the next resource.

I also had some minor issues with schema support in Visual Studio Code. The worst was a false-positive warning that the value of a resource’s “apiVersion” property was not valid, despite the schema documentation showing that it is. This didn’t cause any deployment errors, just the yellow “squiggly” line under that property. Another inconvenience to note: when exporting a template from the portal, not all resources will be in the template. This was most noticeable when I was trying to find the right way to add an SSL certificate to an App Service. The App Service had a certificate applied to it, but the exported template did not include a resource of the type Microsoft.Web/certificates.

While templates are supposed to be idempotent, there are some situations where this isn’t the case. I found this out with Virtual Machines and Managed Disks. I had a template that created the disks and attached them to the VM, but found out later that the disk space was too small. Changing the “diskSizeGB” property and redeploying fails because attached disks cannot be resized. Since this isn’t likely to happen outside of the lab environment, I deallocated the VM and changed the disk sizes in the portal.

Azure & TFS Pipelines

Azure Pipelines, part of the Azure DevOps Services/Server offering, provides the capability to automate building, testing, and deploying applications or resources. Team Foundation Server (TFS), the predecessor to Azure DevOps, also offers pipeline functionality. These projects use both solutions. Since there is no US Government instance of Azure DevOps Services, we use TFS 2017 deployed to a virtual machine in the client’s environment, but we use Azure DevOps Services for our lab environment since it’s less expensive than standing up a VM. While the products are very similar, there are some differences between the two systems.

First, TFS does not support YAML pipelines. YAML allows the pipeline configuration to be managed in source control as part of the repository, making it better integrated into Git flow (versioning, branching, etc.). Also, YAML is just text, and editing text is much quicker than clicking through tasks to change textbox values.

Another difference is the ability to change release variables at queue time. Azure DevOps release variables can be flagged “Settable at release time.” This is very useful, as we typically have at least two, if not three, instances of an application running in a lab environment. A variable for the environment can be added and set at release time, making a single pipeline usable for all the environments instead of having to either edit the pipeline and change the value or create multiple pipelines that do essentially the same thing.

Create a New Release

There were some positive lessons learned while working with both services. Pipelines could be exported and imported between the two with only minor modifications to settings. Since we were using the classic pipeline designer, creating a pipeline requires far more mouse clicks; exporting one generates a JSON file. Once imported into another environment, there were usually only one or two changes to make, because the JSON schema references other resources, such as Task Groups, by ID instead of by name. Not only did this save time, but it also cut down on human error when configuring the tasks.

Task Groups provide a way to group a set of tasks to be reused across multiple pipelines, and they can be exported too. In some projects, we had multiple pipelines that were essentially the same process, but with different settings on the tasks. For example, one client had eight web applications that deploy to a single Azure App Service. One was at the root while the others were in their own virtual directories. Each also had its own development cycle, so it wouldn’t make sense to deploy all eight every time one needed updates. We created a build and release pipeline for each application. Creating two Task Groups, one for build and one for release, allowed us to add a reference to the appropriate group in each pipeline and just change the parameters passed to it.

New Pipeline

Conclusion

Automating resource provisioning and application deployment saves time and creates a reliable, reusable process compared to a manual alternative. ARM templates provide deep, complex customization, in some cases even more than Azure PowerShell or the Azure CLI. Pipelines then take those templates and consistently provision those resources across environments. It would have made life much easier in several of my past roles. While there were some “gotchas” with the various technologies, Azure has been developed with automation as a top priority.

One of our clients has been pushing hard to migrate all of its application infrastructure to Azure. Some of the applications have been using on-premises file servers, and we have been looking at the different options available to migrate these file shares to Azure. We looked at Azure Blob storage, Azure Files, and Azure Disks to find the most fitting solution, one that would offer high performance, permissions at the folder level, and long-term backup retention.

Although Azure Blob storage is great for massive-scale storage, it was easy to dismiss because our applications were using the native file system and didn’t have to support any streaming or random-access scenarios. Azure Files offered fully managed file shares in the cloud. Unlike Azure Blob storage, Azure Files offers SMB access to Azure file shares. By using SMB, we could mount an Azure file share directly on Windows, Linux, or macOS, either on-premises or in cloud VMs, without writing any code or attaching any special drivers to the file system. It provided Windows-like file locking, but Azure Files had some limitations. For example, we did not have folder- or file-level control over permissions; the only way to accomplish that was to create shared access signatures on folders, specifying read-only or write-only permissions, which didn’t work for us. Azure Files Backup provided an easy way to schedule backup and recovery, but an important limitation was that backups could retain files only for a maximum of 180 days.

Azure Disks fulfilled all our requirements. Although running a file server with Azure Disks as back-end storage was much more expensive than an Azure file share, it was the highest-performance file storage option in Azure, which was very important in our scenario because files were used in real time under heavy load. For compliance and regulatory reasons, all files needed to be backed up, which could easily be done with Azure Virtual Machine Backup without any additional maintenance. The only limitation of Azure Virtual Machine Backup was that it only supported disks smaller than 4 TB, so if a need for additional storage arises in the future, it would mean combining multiple disks into a striped volume. Also, after implementing the file server in an Azure VM, we could still get the best of both Azure Files and data disks by using Azure File Sync; having the file server and an Azure file share in the same sync group would ensure minimal duplication and let us set the amount of free space on the volume. So finally, we decided to deploy a file server on an Azure Windows VM with premium SSDs.

Troubleshooting

After we deployed the file server in an Azure Virtual Machine, everything worked like a charm, and it seemed we had found an ideal solution in Azure for our file servers. However, after some time we began to encounter intermittent issues in the VM: CPU usage would near 100 percent, shares would become inaccessible, and the VM would hang at the OS level, bringing everything to a halt.

CPU Average

Generally, to troubleshoot issues in an Azure VM, we can connect to it using the below tools:

  • Remote CMD
  • Remote PowerShell
  • Remote Registry
  • Remote services console

However, the issue we were encountering hung the OS itself, so we could not use the above tools to troubleshoot the VM. There was not even an event log entry that would indicate a possible guest OS hang. So, the only option left was to generate a memory dump to find the root cause of the issue. Below, I will explain how to configure an Azure VM for a crash dump and how to trigger an NMI (Non-Maskable Interrupt) crash dump from the Serial Console.

Serial Console is a console for Azure VMs that can be accessed from the Azure portal for VMs deployed using the resource management deployment model. It connects directly to the COM1 serial port of the VM. From the Serial Console we can start a CMD/PowerShell session or send an NMI to the VM. An NMI creates a signal that the VM cannot ignore, so it is used as a mechanism to debug or troubleshoot systems that are not responding.

To enable Serial Console, we need to RDP into the VM and run the below commands.

bcdedit /ems {default} on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200

We also need to execute the below commands to enable boot loader prompts.

bcdedit /set {bootmgr} displaybootmenu yes
bcdedit /set {bootmgr} timeout 7
bcdedit /set {bootmgr} bootems yes

When the VM receives an NMI, its response is controlled by the VM configuration. Run the below commands in Windows CMD to configure it to crash and create a memory dump file when receiving an NMI.

REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DumpFile /t REG_EXPAND_SZ /d "%SystemRoot%\MEMORY.DMP" /f
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v NMICrashDump /t REG_DWORD /d 1 /f
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 1 /f

Before we can use the Serial Console to send an NMI, we also need to enable Boot diagnostics for the VM in the Azure portal. Now we are all set to generate the crash dump. Whenever the VM hangs, all we need to do is open the Serial Console in the Azure portal, wait until it displays the SAC prompt, and select Send Non-Maskable Interrupt (NMI) from the menu, as shown below.

Serial Console in Azure Portal

In our case, we had been using the FireEye service for security purposes. After we analyzed the memory dump, we found that FeKern.sys (FireEye) was stuck waiting for a spinlock and the driver was exhausting CPU time. FeKern.sys is a file system filter driver that intercepts requests and, in this case, was not able to handle the load. There can be many reasons for an unresponsive VM, and NMI crash dumps can be of great help in troubleshooting and resolving these issues.

Happy Computing!
Sajad Deyargaroo

This blog post is for developers of all levels who are looking for ways to improve efficiency and save time. It begins by providing some background on me and how my experience with Microsoft Excel has evolved and aided me as a developer. Next, we cover a scenario where Excel can be leveraged to save time. Finally, we go over a step-by-step example using Excel to solve the problem.

Background

As a teenager growing up in the 80s, I was fortunate enough to have access to a computer. One of my favorite applications to use as a kid was Microsoft Excel. With Excel, I was able to create a budget and a paycheck calculator to determine my meager earnings from my fast food job. As my career grew into software development, leveraging all of the tools at my disposal against repetitive and mundane tasks made me more efficient. Over the years, colleagues have seen the solutions I have used and have asked me to share how I came up with and implemented them. In this two-part blog post, I will share the techniques I have used to generate C#, XML, JSON, and more. I will use data loading in Microsoft Power Apps and Dynamics as a real-world example; however, we will need to start with the basics.

The Basics

Before going into the data-loading example, I wanted to provide a very simple one. Keep in mind that there may be more effective solutions to this specific problem that do not use Excel; however, I am using it to illustrate the basic technique. Let’s say you had a data model and a contact model that, for the most part, were the same with the exception of some property names, and you needed to write methods to map them. You know the drill:

var contact = new Contact();
contact.FirstName = datamodel.firstName;
contact.LastName = datamodel.lastName;
contact.PhoneNumber = datamodel.phoneNumber;
contact.CellPhone = datamodel.mobileNumber;

Not a big deal, right? Now let’s say you have a hundred of these to do, and each model may have 50+ properties! This would very quickly turn into a time-consuming and mundane task, not to mention you would likely make a typo along the way that another developer would be sure to point out in the next code review. Let’s see how Excel could help in this situation.

In this scenario, the first thing you will need is the raw data for the contact and data models. One way to get it is from the property declarations. Consider the classes below:

Use Properties to Identify Classes

  1. Create 3 Excel worksheets called Primary, Secondary, and Generator
  2. Copy/paste the property statements from Contact into the Primary worksheet and from ContactDataModel into the Secondary worksheet.
  3. Select Column A in the Primary worksheet
    Create three Excel Worksheets
  4. In Excel, select the Data tab and then Text to Columns
  5. Choose Delimited, then Next
    Choose Delimited
  6. Uncheck all boxes and then check the Space checkbox, then Finish
    Uncheck All Boxes
  7. Your worksheet should look like the following:
    Sample of Worksheet
  8. Repeat 3-7 with the Secondary worksheet
  9. In the Generator worksheet, select cell A1 and then press the = key
  10. Select the Primary worksheet and then cell D1
  11. Press the Enter key, you should return to the Generator worksheet and the text “FirstName” should be in cell A1
  12. Select cell B1 and then press the = key
  13. Select the Secondary worksheet and then cell D1
  14. Press the Enter key; you should return to the Generator worksheet and the text “firstName” should be in cell B1
  15. Drag and select A1:B1. Click the little square in the lower-right corner of your selection and drag it down to row 25 or so. (Note: you would need to keep dragging these cells down if you added more classes.)
    You will notice that by dragging the cells down, it incremented the rows in the formula.
    Incremented Rows in the Formula
    Press CTRL+~ to switch back to values.
  16. Select cell C1 and enter the following formula:
    =IF(A1=0,"",A1 & "=" & B1 & ";")
    As a developer, you probably already understand this, but the IF statement checks whether A1 has a value of 0 and simply returns an empty string if so. Otherwise, the concatenated string is built.
  17. Similar to an earlier step, select cell C1 and drag the formula down to row 25. Your worksheet should look like:
    Select and Drag Formula
  18. You can now copy/paste the values in column C into the code:
    Copy and Paste Values into Column C

As you continue on, Excel keeps track of the most recent Text to Columns settings used, so if you paste another set of properties into the Primary and Secondary worksheets, you should be able to skip steps 1-5 for the remaining classes. In the sample class file and workbook, I have included Address models as an illustration.

Next Steps

This example has covered the basic concepts of code generation with Microsoft Excel: extracting your data and writing the formulas that generate the necessary code. Depending on what you are trying to accomplish, these requirements may grow in complexity. Be sure to consider the time investment and payoff of using code generation, and use it where it makes sense. One such investment that has paid off for me is data loading in Microsoft Power Apps, which we will cover in the next post: Code Generation with Microsoft Excel: A data-loading exercise in Microsoft Power Apps.

Download Example Workbook

Download Address Models