Blazor is coming! In fact, it’s coming sooner rather than later. Check out ASP.NET’s blog post from April 18, 2019, announcing the official preview.

What is Blazor?

by Vivek Gunnala 

Blazor is a new interactive .NET web framework, which is part of the open-source .NET platform. Blazor uses C#, HTML, CSS, and Razor components instead of JavaScript. It’s built on open web standards without the need for any plugins or code transpilation, and it works on all modern web browsers, which is why it has been called “.NET in the browser”: the C# code runs directly in the browser using WebAssembly. Both client-side and server-side code are developed in C#, which allows you to reuse code and libraries between both sides, such as validations, models, etc.

Apps built in Blazor can use existing .NET libraries by leveraging .NET Standard, allowing the same code to be used across platforms. Having started as an experimental project, Blazor is evolving rapidly, with over 60,000 contributors.
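To make that concrete, here is a minimal sketch of a single Razor component; the route and member names are illustrative, and it uses the preview-era syntax (onclick, @functions) that also appears later in this post. Markup and C# live together in one .razor file:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" onclick="@IncrementCount">Click me</button>

@functions {
    // Component state and the event handler are plain C#
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}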

About WebAssembly

At a high level, WebAssembly is explained on the official site as “a binary instruction format for a stack-based virtual machine. It is designed as a portable target for compilation of high-level languages, enabling deployment on the web for client and server applications.”

Should I Use Blazor For My Next Project?

by Ash Tewari 

Blazor’s development status has been promoted from an “Experimental” project to a committed product. This is great news. Blazor is available now as an official preview. Let’s review the factors you should consider when making decisions about adopting Client-Side Blazor for your next production project.

Mono.wasm (the .NET runtime compiled into WebAssembly, executing your .NET assemblies in the browser) does not interact with the DOM directly. It goes through JS interop, which is expensive. The areas where .NET code will see the most net benefit are the model and business logic, not DOM manipulation. If your application is very chatty with the DOM, you might need to carefully assess whether you are getting the expected performance boost from WebAssembly execution of your .NET assemblies. [https://webassemblycode.com/webassembly-cant-access-dom/]
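To illustrate what “going through JS interop” means, here is a hedged sketch using the IJSRuntime service; the JavaScript function name is an assumption, and every call like this crosses the .NET/JavaScript boundary with serialization on both sides:

@using Microsoft.JSInterop
@using System.Threading.Tasks
@inject IJSRuntime JSRuntime

@functions {
    // Illustrative helper: the actual DOM work happens in JavaScript, not in .NET.
    async Task HighlightRowAsync(string elementId)
    {
        // "demoInterop.highlightRow" is an assumed JS function registered on window
        await JSRuntime.InvokeAsync<object>("demoInterop.highlightRow", elementId);
    }
}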

Currently, only the Mono runtime is compiled to WebAssembly; your .NET code is executed as-is. This means your .NET code is essentially going through two layers of interpretation, which has a noticeable performance impact. There is work being done to compile .NET assemblies to WASM ahead of time. That, and other related improvements in linking and compiling, are expected to improve performance. The fact that Microsoft has decided to commit to Blazor as a product indicates confidence that these performance improvements are likely to become a reality.
[https://www.mono-project.com/news/2018/01/16/mono-static-webassembly-compilation/, https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md]

In the client-side hosting model, your code is still running in the browser sandbox, so you don’t have any access to the file system or other OS libraries. This limitation applies to JavaScript as well. In fact, WebAssembly is executed by the JavaScript runtime: yes, the same runtime that executes the JavaScript in your web application.
[https://medium.com/coinmonks/webassembly-whats-the-big-deal-662396ff1cd6]

Well, if WebAssembly is executed by the same JavaScript runtime, then where do the performance gains everyone is touting come from? The answer is that the gains come from skipping the parsing step and from faster, more predictable optimization: WebAssembly is decoded and JIT-compiled rather than parsed and compiled before the JIT step. However, work is still ongoing to make .NET IL interpretation reach the performance levels required to fulfill that promise.
[https://hacks.mozilla.org/2017/02/what-makes-webassembly-fast/]

Remember that your Blazor code executes on the UI thread of the browser, which can create a bottleneck if your application is CPU-bound. Ironically, CPU/computationally intensive applications are also among the most compelling use cases for Blazor. You may need to look into running Blazor components in a Web Worker. We will cover this in a separate blog post dedicated to this technique.

Server-Side Blazor

by Sean McGettrick 

Server-side Blazor, previously referred to as Razor Components, allows developers the same freedom to create UI components using C# instead of JavaScript that client-side Blazor does. The primary difference is that the code is hosted on the server instead of in the browser. Blazor components and application logic written to run client-side can also be used server-side.

Razor Components support all the functionality a front-end developer would expect in a modern library including:

  • Parameterization
  • Event handling
  • 2-way data binding
  • Routing
  • Dependency injection
  • Layouts
  • Templating
  • CSS cascading

Razor Components can be nested and reused, similar to React.
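As a rough sketch of parameterization, event handling, and nesting (parameter visibility and event types shifted between preview releases, so treat the exact syntax as an assumption rather than the definitive API):

@* ChildAlert.razor: a reusable component with a parameter and an event *@
<div class="alert alert-info">
    <span>@Message</span>
    <button onclick="@OnDismiss">Dismiss</button>
</div>

@functions {
    [Parameter] string Message { get; set; }
    [Parameter] Action OnDismiss { get; set; }
}

@* A parent component nests it by using the tag, for example: *@
@* <ChildAlert Message="Saved successfully" OnDismiss="@(() => alertVisible = false)" /> *@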

Differences from Client-Side

With server-side Blazor, all components are hosted on and served from an ASP.NET Core server instead of being run in the browser via WASM. Communication between client and server is handled via SignalR.
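For reference, here is a minimal sketch of the server-side wiring as it appears in ASP.NET Core 3.0-era templates; the exact registration methods moved around during the previews, so verify these against your project template:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        // Registers the services that back server-side Blazor circuits
        services.AddServerSideBlazor();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // The SignalR hub that carries UI events up and render diffs down
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}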

Further differences between client and server-side Blazor will be outlined in the next two sections.

Advantages

Server-side Blazor offers a number of advantages over its client-side counterpart. These include:

  • No WASM dependencies. Older desktop browsers and some current mobile browsers lack support for WASM. Since server-side Blazor only requires the browser to support JavaScript, it can run on more platforms.
  • Building on the last point, since the components and application logic sit server-side, the application is not restricted to the capabilities of the browser.
  • Developing the application on an entirely server-based platform gives you access to more mature .NET runtime and tooling support.
  • Razor components have access to any .NET Core compatible API.
  • Application load times in the browser are faster due to a smaller footprint. Only the SignalR Javascript code required to run the application is downloaded to the client.

Disadvantages

There are, however, some disadvantages to using server-side Blazor:

  • There is higher application latency due to user interactions requiring a network round-trip between the browser and the server.
  • Since the application is entirely hosted on the server, there is no offline support. If the server goes down, the application will not function which breaks one of the core tenets of building a Progressive Web Application (“Connectivity independent: Service workers allow work offline, or on low-quality networks”).
  • Because the server is responsible for maintaining client state and connections, scaling the application can be difficult since the server is doing all the work.
  • The application must be hosted on an ASP.NET Core server.

Server-Side Blazor Code Re-Use, Razor Pages to Blazor using an MVVM approach

by Adam Vincent 

What is MVVM?

In a nutshell, MVVM is a design pattern derived from the Model-View-Presenter (MVP) pattern. The Model-View-Controller (MVC) pattern is also derived from MVP, but where MVC is suited to sit on top of a stateless HTTP protocol, MVVM is suited for user interface (UI) platforms with state and two-way data binding. MVVM is commonly implemented in desktop (WPF/UWP), web (Silverlight), and mobile (Xamarin.Forms) applications. Like those frameworks, Blazor acts much like a Single Page Application (SPA) with two-way data binding and can benefit from the MVVM pattern. So whether you have existing MVVM code in the form of a WPF or mobile application, or are starting green with new code, you can leverage MVVM to re-use your existing code in Blazor or share your code with other platforms.

You can find more information on MVVM on Wikipedia.

Example Presentation Layer

BindableBase 

At the heart of MVVM is the INotifyPropertyChanged interface, which notifies clients that a property has changed. It is this interface that turns a user interaction into a call into your code. Usually all ViewModels, and some Models, implement INotifyPropertyChanged, so it is common to either use a library (Prism, MVVM Light, Caliburn) or to create your own base class. What follows is a minimal implementation of INotifyPropertyChanged.

using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class BindableBase : INotifyPropertyChanged
{
    // Sets the backing field and raises PropertyChanged only when the value actually changes.
    protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        OnPropertyChanged(propertyName);
        return true;
    }
 
    public event PropertyChangedEventHandler PropertyChanged;
 
    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

In this simplified model class, which derives from BindableBase, we have a NewCustomerModel with a single property, FirstName. In this context we would probably have a customer filling out an input within a form on a website where they must fill in their first name. This input would be bound to an instance of NewCustomerModel on the ViewModel. While the customer is filling out the form, since we are in a two-way data binding scenario, each time the customer enters or removes a character in the form’s input box, SetField() is called and causes the PropertyChanged event to fire.

public class NewCustomerModel : BindableBase
{
    private string firstName;
    
    public string FirstName
    {
        get => firstName;
        set
        {
            SetField(ref firstName, value);
        }
    }
}

Learn More: If you need to know more about INotifyPropertyChanged the Microsoft Docs cover this topic very well.
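To see the notification mechanism in isolation, here is a quick console-style sketch against the NewCustomerModel shown above; a UI binding engine subscribes to PropertyChanged in much the same way:

var model = new NewCustomerModel();

// Subscribe the way a binding engine would
model.PropertyChanged += (sender, e) =>
    Console.WriteLine($"Property changed: {e.PropertyName}");

model.FirstName = "Ada";   // prints "Property changed: FirstName"
model.FirstName = "Ada";   // no output: SetField short-circuits when the value has not changed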

Model

With INotifyPropertyChanged out of the way, here is the entire presentation model.

public class NewCustomerModel : BindableBase
{
    [Display(Name = "Customer Number")]
    public string CustomerNumber { get; set; }
 
    [Display(Name = "Full Name")]
    public string FullName => $"{FirstName} {LastName}";
 
    private string firstName;
    [Required]
    [Display(Name = "First Name")]
    public string FirstName
    {
        get => firstName;
        set
        {
            SetField(ref firstName, value);
            OnPropertyChanged(nameof(FullName));
        }
    }
 
    private string lastName;
    [Required]
    [Display(Name = "Last Name")]
    public string LastName
    {
        get => lastName;
        set
        {
            SetField(ref lastName, value);
            OnPropertyChanged(nameof(FullName));
        }
    }
 
    [Display(Name = "Address")]
    public string Address => $"{Street}, {City}, {State} {PostalCode}";
 
    private string street;
 
    [Required]
    [Display(Name = "Street Address")]
    public string Street
    {
        get => street;
        set
        {
            SetField(ref street, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string city;
 
    [Required]
    [Display(Name = "City")]
    public string City
    {
        get => city;
        set
        {
            SetField(ref city, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string state;
 
    [Required]
    [Display(Name = "State")]
    public string State
    {
        get => state;
        set
        {
            SetField(ref state, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string postalCode;
 
    [Required]
    [Display(Name = "Zip Code")]
    public string PostalCode
    {
        get => postalCode;
        set
        {
            SetField(ref postalCode, value);
            OnPropertyChanged(nameof(Address));
        }
    }
}

There are a few things to point out in this presentation model. First, please note the use of Data Annotation attributes such as [Required]. You can decorate your properties to provide rich form validation feedback to your users. When the customer is filling out a form and misses a required field, it will not pass model validation. This prevents the form from being submitted and provides an error message if one is configured. We will cover this more in the View section.

The next thing to point out: I covered SetField() in the INotifyPropertyChanged section, but there is an additional bit of complexity here.

[Display(Name = "Full Name")]
public string FullName => $"{FirstName} {LastName}";

Note that the FullName property is a { get; }-only concatenation of the customer’s first and last name. Since the customer fills out the first and last name in separate form fields, changing either one causes FullName to change. We want the ViewModel to be informed of any changes to FullName.

private string firstName;
[Required]
[Display(Name = "First Name")]
public string FirstName
{
    get => firstName;
    set
    {
        SetField(ref firstName, value);
        OnPropertyChanged(nameof(FullName));
    }
}

After the SetField() is invoked in the base class, there is an additional call to OnPropertyChanged(), which lets the ViewModel know that in addition to FirstName changing, FullName has also changed.

Example ViewModel Interface

The example ViewModel below will expand on the model above. We’ll be using a simplified user story of “Creating a New Customer.”

Blazor supports .NET Core’s dependency injection out of the box, which makes injecting a ViewModel very simple. In the following ViewModel interface, we’ll need our concrete class to expose an instance of NewCustomerModel as well as a method that knows how to create a new customer.

public interface ICustomerCreateViewModel
{
    NewCustomerModel NewCustomer { get; set; }
    void Create();
}

And the concrete implementation of ICustomerCreateViewModel:

public class CustomerCreateViewModel : ICustomerCreateViewModel
{
    private readonly ICustomerService _customerService;
 
    public CustomerCreateViewModel(ICustomerService customerService)
    {
        _customerService = customerService;
    }
 
    public NewCustomerModel NewCustomer { get; set; } = new NewCustomerModel();
 
    public void Create()
    {
        //map presentation model to the data layer entity
        var customer = new NewCustomer()
        {
            CustomerNumber = Guid.NewGuid().ToString().Split('-')[0],
            FullName = $"{NewCustomer.FirstName} {NewCustomer.LastName}",
            Address = $"{NewCustomer.Street}, {NewCustomer.City}, {NewCustomer.State} {NewCustomer.PostalCode}"
        };
 
        //create
        _customerService.AddNewCustomer(customer);
    }
}

ViewModel Deep-Dive

In the constructor, we’re getting an instance of our ICustomerService which knows how to create new customers when provided the data layer entity called NewCustomer.
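The ICustomerService implementation itself is outside the scope of this example; a hypothetical shape for it, matching the AddNewCustomer call made in Create() above, might look like this:

// Hypothetical data-layer service contract; the real implementation would persist NewCustomer entities.
public interface ICustomerService
{
    void AddNewCustomer(NewCustomer customer);
}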

I need to point out that NewCustomer and NewCustomerModel serve two different purposes. NewCustomer, a simple class object, is the data entity used to persist the item. NewCustomerModel is the presentation model. In this example, we save the customer’s full name as a single column in a database (and is a single property in NewCustomer), but on the form backed by the NewCustomerModel presentation model, we want the customer to fill out multiple properties, ‘First Name’ and ‘Last Name’.

In the ViewModel, the Create() method shows how a NewCustomerModel is mapped to a NewCustomer. There are some tools that are very good at doing this type of mapping (like AutoMapper), but for this example the amount of code to map between the types is trivial. For reference, what follows is the data entity.

public class NewCustomer
{
    public string CustomerNumber { get; set; }
    public string FullName { get; set; }
    public string Address { get; set; }
}

Opinionated Note: Presentation models and data entities should be separated into their respective layers. It is possible to create a single CustomerModel and use it for both presentation and data layers to reduce code duplication, but I highly discourage this practice.

View

The final piece of the MVVM pattern is the View. The View in the context of Blazor is either a page or a component: a .razor or .cshtml file containing Razor code, which is a mix of C# and HTML markup. In the context of this article, our view will be a customer form that can be filled out. There is also a button that calls the ViewModel’s Create() method when the form has been filled out properly according to the validation rules.

@page "/customer/create"
@using HappyStorage.Common.Ui.Customers
@using HappyStorage.BlazorWeb.Components
@inject Microsoft.AspNetCore.Components.IUriHelper UriHelper
@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel
 
<h1>Create Customer</h1>
 
<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />
    <div class="form-group">
        <h3>Name</h3>
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.LastName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />
    </div>
    <div class="form-group">
        <h3>Address</h3>
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.Street)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.Street" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.City)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.City" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.State)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.State" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.PostalCode)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.PostalCode" />
    </div>
    <br />
    <button class="btn btn-primary" type="submit">Submit</button>
    <button class="btn" type="button" onclick="@ReturnToList">Cancel</button>
</EditForm>

The first thing to note is at the top of the code. This is how we use dependency injection to get an instance of our ViewModel.

@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel
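For that @inject to resolve, the ViewModel (and the service it depends on) has to be registered with the dependency injection container at startup. Here is a hedged sketch of what that registration might look like; the lifetimes and the CustomerService class name are assumptions, not code from the sample:

// In Startup.ConfigureServices (server-side hosting), or wherever your services are registered.
public void ConfigureServices(IServiceCollection services)
{
    // CustomerService stands in for whatever concrete class implements ICustomerService.
    services.AddScoped<ICustomerService, CustomerService>();
    services.AddScoped<ICustomerCreateViewModel, CustomerCreateViewModel>();
}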

Easy! Next, we need to create the form. The EditForm component needs an instance of a model to bind to (the ViewModel’s NewCustomer) and a method to call when the user submits a valid form.

<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">
...
</EditForm>

Next, we bind each property to its respective input field. Blazor has some built-in input components (such as InputText) which help you accomplish the binding. They are still under development and you may find some features are lacking at the time of writing. Please refer to the docs in the note below for more up-to-date info.

Note: The LabelComponent is something I’ve created as a replacement for the asp-for tag helper; it retrieves the DisplayAttribute from the presentation model classes. That code is available in the GitHub repository listed at the top.

<LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
 
<LabelComponent labelFor="@(() => viewModel.NewCustomer.LastName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />

The magic here is bind-Value, which binds our InputText text box to the value of the ViewModel’s instance of the NewCustomerModel presentation model.

Note: You can view full documentation on Blazor Forms and Validation here.

Last but not least, we’ll need some code to call our ViewModel’s Create() method when the form is submitted and valid. We’ll also need the ReturnToList method I’ve wired to the Cancel button’s onclick.

@functions {
    private void HandleValidSubmit()
    {
        viewModel.Create();
        ReturnToList();
    }
 
    private void ReturnToList()
    {
        UriHelper.NavigateTo("/customers");
    }
}

Conclusion

That’s it! In summary, I’ve covered what MVVM is, how Blazor can benefit from it, as well as an in-depth look at a simple example of how we can create a form with validation and rich feedback to the user. It is also important to reiterate that this example works not only in Blazor but can also be used in Windows Presentation Foundation (WPF) desktop applications as well as on other platforms. Please check out the GitHub repository as I continue to develop and expand on this concept.

Developer Gotchas

by Morgan Baker 

Working with a new framework like Blazor always has its learning experiences. The goal of this section is to help alleviate headaches by providing common problems and solutions we encountered with Blazor.

  • My localization isn’t working!
    For this problem, check your route parameters. Depending on the type of parameter, the route uses the invariant culture by default, which means URLs are not localized. This can be solved by allowing the parameter to be passed in as any type and then validating the type in C# code before using it.
  • I can’t debug my C# code!
    Server-side debugging for Blazor doesn’t exist yet, but you’ll still be able to debug the whole application (assuming your server-side is using ASP.NET Core).
  • I can’t see my C# in the browser!
    In client-side Blazor, your C# is compiled into .NET assemblies that run on a WebAssembly-based runtime in the browser, so the original C# source isn’t displayed in the browser’s normal developer tools. However, you can still see the code in Chrome through remote debugging. Follow these steps.
  • Why isn’t my new route working?
    Most of the time you’ll need to rebuild the application to get new routes on development applications. Other causes might be naming problems or a problem with the route parameter types.
  • Everything seems to be loading slow
    This can be multiple issues, some of which are not Blazor-specific. For the Blazor-specific issues, it varies between server and client. Any page using server-side Blazor must make a round-trip to the server for its interactions, which takes a hit on performance. Any site using client-side Blazor will have a long initial load time while the runtime and assemblies are downloaded, then perform better afterward.
  • I’m seeing a blank page and I set everything up correctly!
    This is a specific one that I ran into when first using the templates in Visual Studio 2019. The solution was making sure I had the right .NET Core SDK installed. You can have the wrong version and still create a Blazor website with no errors, at least until the app starts running. You can install the latest version of the .NET Core SDK here.

Online Resources

by JP Roberts III 

As of the writing of this blog post, Blazor is still a new framework, and as such, is still changing rapidly. Pluralsight doesn’t have any courses covering Blazor, Udemy only has a couple of short videos, and Microsoft’s Learning site has no specific courses dedicated to Blazor.

However, there are several websites that have a good deal of information and samples for developers:

YouTube also has several informative videos on Blazor, such as a two-part series on the Microsoft Visual Studio channel: Blazor – Part 1 , Blazor – Part 2.

Some Updates for Global Azure Virtual Network (VNet) Peering in Azure

Last year, I wrote a blog post discussing Global VNet Peering in Azure to highlight the capabilities and limitations. The use of global peering at that time was significantly different in capability from local peering and required careful consideration before including in the design. Microsoft is continually adding and updating capabilities of the Azure platform, and the information from my original post requires updates to describe the current state of VNet peering.

The virtual networks can exist in any Azure public cloud region, but not in Azure national clouds.

Update – Global VNet peering is now available in all Azure regions, including Azure national clouds. You can create peering between VNets in any region in Azure Government, and peering can exist between US DoD and US Gov regions. The peering can span both regions and subscriptions.

Azure's Global Footprint
The above image shows Azure regions and the global footprint.

In Azure commercial, a peering can also be created between VNets in different Azure Active Directory tenants using PowerShell or command-line interface (CLI). This requires configuring an account with access in both tenants with at least the minimum required permissions on the VNets (network contributor role). In Azure Government, this is not currently possible and peered VNets must exist under the same Azure Active Directory tenant.

Resources in one virtual network cannot communicate with the IP address of an Azure internal load balancer in the peered virtual network.

Update – This limitation existed with the available load balancer at that time. Load balancers are now available in “Basic” and “Standard” tiers. The Basic load balancer is not accessible from a globally peered VNet. The “Standard” load balancer is accessible across global peering and has other additional features. A design can generally be adapted to replace Basic load balancers with Standard load balancers in high availability deployments where implementing global peering. Basic load balancers are a free resource. Standard load balancers are charged based on the number of rules and data processed.

Several Azure services also utilize a Basic load balancer and are subject to the same constraints. Please verify that the resources you are using for your specific design are supported.

You cannot use remote gateways or allow gateway transit. To use remote gateways or allow gateway transit, both virtual networks in the peering must exist in the same region.

Update – This is no longer accurate. A globally peered VNet can now use a remote gateway.

Transferring data between peered VNets does incur some cost. The cost is nominal within the same region. Cost may become significant when moving between regions and through gateways.

In summary, there have been significant updates to Global VNet Peering since my original post. The current capability now more closely aligns with local peering. These changes simplify network connectivity between regions, and the inclusion of multi-region redundancy makes disaster recovery more feasible.

Improve Networking and Connectivity in Your Environment. Contact AIS Today to Discuss Your Options.

Recently, I have been using the new Windows Terminal to run CMD and PowerShell commands while supporting clients. I wanted to write this blog to help answer some of the questions and issues I had while installing and customizing it.

Note: The New Windows Terminal is in PREVIEW. That means you may experience crashes or features may appear/disappear without warning.

The new terminal has an impressive set of features that I will detail below. These include:

  • Tabbed interface for Command Prompt and Multiple PowerShell versions
  • Support for Unicode characters so you can now use emoji in your scripts!
  • Rich customization options that can be modified to suit your preferences

Install Notes

Prerequisites

The first setting to check is your current version of Windows 10. You can find this by right-clicking Start and selecting System. Scroll down and look at the version you are currently running. You need to be running 1903 or later:

Windows 10 System Settings

If you need to update to version 1903 (also called the Windows 10 May 2019 Update), open Windows Update by clicking Start, typing “Update”, and selecting “Check for Updates” in the Start menu. This will bring up Windows Update. From there you can click the “Check for Updates” button and apply the latest version. If no update appears, you can manually download and install version 1903 here.

Install the New Terminal

Install Windows Terminal (Preview) from the Microsoft Store: open the Store app and search for Windows Terminal. Once you find it, click “Get” in the upper right. Ensure you are on a device you have logged in to with a Microsoft/Outlook.com/LiveID account. I had an issue with device authorization that I had to work through before the Store would allow me to download and install the terminal.

Once the install completes, you can start the terminal by clicking Start and typing Terminal. If you are a taskbar die-hard (like me), you may also wish to pin the app:

Windows Terminal

The Interface

Once you fire up the terminal, you will immediately notice it looks different from a standard PowerShell or CMD shell. The tabs along the top allow you to run multiple shells in the same session. You can also link to multiple versions of PowerShell by adding additional profiles to the settings JSON file. We will cover the customization of settings in the next section.

Multiple Versions of Powershell

Customization

One of the most exciting features of the new terminal is the ability to customize it. You can set custom background images (yes, even GIFs!). You can also change text color schemes, cursor shape and color, and background opacity.

To access the configuration settings, you need to click the down arrow at the upper right of the terminal. From there click “Settings”:

Terminal Configuration Settings

That will open the profiles.json in your favorite editor. On my system that’s VSCode.

Scroll down to the Profiles section:

Terminal Profile Section

To create your own profile copy and paste the following JSON above an existing profile:

{
    "startingDirectory": "%USERPROFILE%",
    "guid": "{565ed1db-1474-455e-9d62-cb9fc7eb3f59}",
    "name": "PowerShell",
    "background": "#012456",
    "colorscheme": "Campbell",
    "historySize": 9001,
    "snapOnInput": true,
    "cursorColor": "#FFFFFF",
    "cursorShape": "bar",
    "commandline": "powershell.exe",
    "fontFace": "Courier New",
    "fontSize": 12,
    "acrylicOpacity": 0.5,
    "useAcrylic": false,
    "closeOnExit": true,
    "padding": "0, 0, 0, 0",
    "icon": "ms-appdata:///roaming/pwsh-32.png"
},

From there, we can use this structure to begin our custom profile.

Important: The GUID above must be unique for each profile. You can change one character or use the PowerShell cmdlet New-Guid to generate a completely new GUID. If there are overlapping profile GUIDs, unexpected behavior will result.

Next, let’s look at the implementation details of each customization:

Acrylic Settings

The acrylic setting allows you to set the opacity of the terminal’s background. When the terminal window is in focus, there is a cool translucent effect allowing you to see the windows behind the terminal. When the window is out of focus, the opacity is cranked back up and the terminal becomes fully opaque once more.

CMD in Focus:

CMD in Focus

CMD Out of Focus:

CMD Out of Focus

Note: If you use a background image, Acrylic needs to be disabled (set to false). As of this writing, acrylic does not support the overlay of a background image.

Background Images

You can add a background image to a specific profile. The image can be static or a GIF. You can add a background image with the addition of the following key/value pair in the profile:

"backgroundImage" : "/aisbackground.jpg",

This will change the default background to the image you have added:

CMD Background Image

Or you can get creative and add a GIF:

"backgroundImage" : "/nyan.gif",

Fun GIF

Note: If you set a GIF as a background, you should also add the following key to the profile containing the GIF:

"backgroundImageStretchMode" : "uniformToFill",

Color Schemes

There are default color schemes included if you scroll down to the Schemes area in the profile JSON. The default names you can use are (Pay attention to the case below, it matters):

  • Campbell (The new default color scheme for Windows Console)
  • One Half Dark
  • One Half Light
  • Solarized Dark
  • Solarized Light

You can modify the Hex values for the existing schemes or copy/paste one of the above and rename/edit it how you see fit.

These color schemes can also be applied to standard CMD console through the Microsoft ColorTool.

Cursor Shape and Color

In addition to overall color schemes, you can modify the shape and color of the cursor in your custom profile.

Here are the other options and their settings for cursor shape:

Important: Note the camelCase, once again, these properties are case sensitive.

bar

"cursorShape" : "bar",
Cursor Shape Bar

emptyBox

"cursorShape" : "emptyBox",

Cursor Shape emptyBox

filledBox

"cursorShape" : "filledBox",

Cursor Shape filledBox

underscore

"cursorShape" : "underscore",

Cursor Shape Underscore

vintage

"cursorShape" : "vintage",

Cursor Shape Vintage

I also changed the cursor color to the blue from our company style guide:

"cursorColor" : "#0F5995",

Cursor Color AIS

Icon Setting

When you click the down arrow in the new terminal, you will notice a small blank space next to the CMD/Command Prompt shell.

Terminal Icon Setting

This can also be customized. I changed mine to the AIS logo in my custom profiles:

Terminal Custom Icon

To accomplish this, edit the Icon key in the profile as follows:

"icon" : "C:/Users/Clint.Richardson/aisfavicon.png",

Command Line (Target shell):

You may have noticed in the previous section that my choice of PowerShell version also changed. This allows me to run either PowerShell 5, which ships with Windows 10, or the new PowerShell Core 6. I also wanted to add a visual cue to the background image so I knew when I was using version 6.

PowerShell Version 5

PowerShell Version 5

PowerShell Version 6

PowerShell Version 6

To enable version 6 in a custom profile, you first need to download and install PowerShell Core 6. After that, you can make the following change to the commandline key:

"commandline" : "pwsh.exe",

I also added the PowerShell Core 6 avatar to my background image. If you would like to add the image, it can be found here. I also had to convert the SVG to a PNG. You can do that here.

Emoji in Scripts

Finally, VS Code, PowerShell Core 6, and the new Windows Terminal all understand Unicode characters. What does that mean to us? EMOJIS IN SCRIPTS 😁😂😜!!!!

A straightforward example of this is to show the Write-Host cmdlet in VS Code. First, we write our Write-Host command, then wherever we want to insert an emoji, we press WIN+. on the keyboard (that’s the Windows key and the period). From that menu, we can insert our emoji into our PowerShell script.

Emojis in Script

Once we save and run the script in the New Terminal, this is what we see:

New Terminal Script

In Closing

I hope this post has helped you understand the customization options available in the new Windows Terminal. In the future, I’m sure the customization options will receive better documentation, or maybe even a UI to configure them.

Now get to downloading, and I hope you have as much fun making the new terminal your own!

Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months and years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end of support products, either.

So, how do we balance the two competing objectives:

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as-is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift n Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and denial-of-service attack protection solely by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since technically you can take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a Capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily and certainly not in the long run. Consider options like Reserved Instance (RI) pricing that can offer an up to 80% discount based on a one- or three-year commitment.

Furthermore, you can combine RI with Azure Hybrid Benefits (AHUB) which provides discounts for licenses already owned. Finally, don’t forget to take into account the savings from decreased needs for power, networks, real estate, and the cost of resources to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Please refer to Gartner research that suggests that over time, the cloud can become cost-effective.

AMP Move out of Data Center

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR plan setup that is conducive for ongoing DR tests. Having an effective DR plan is one of the most critical responsibilities of IT. Once again, since geo-replication is innate to Azure, your disks are replicated to an Azure region that is at least 400 miles away, by default. Given this, DR is almost out-of-the-box.
  • Extended lease of life on out of support software – If you are running an Operating System (OS), such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years from the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure SQL Managed Instance offers you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is running as an Azure SQL Managed Instance, you don’t have to worry about patching and backups, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API driven, organizations can take baby steps towards automated provisioning.
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity”. We also know that Artificial Intelligence (AI) needs lots and lots of data to train Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of the applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.

Strategic Cloud Transformation

Strategic Cloud Transformation

Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture application and application tiers “as-is” in containers and run on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details of this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.

The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference #MSInspire, Julia White announced AMP (Azure Migration Program).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from a strategic cloud transformation that follows. Microsoft programs like AMP, combined with 200+ Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.

How to use galleries to create dynamic entries in a data source in PowerApps

In this article, we will see how we can use galleries in PowerApps to create multiple rows for adding records to a data source. We will create dynamic entries in a gallery that looks like a form and adds/deletes a line/row with the press of a button.

Scenario: XYZ Inc. is a sales company that deals in sales of hardware components from the manufacturers to retailers. User A is an on-field sales agent of XYZ Inc. and uses a static application to enter the order details from a customer. This application is further connected to a SharePoint list and creates a new item on the list whenever User A enters the detail and hits the submit button. The application provides the ability to enter only one order detail at a time and User A ends up putting more effort and time in entering those details.

We designed a customized PowerApp for XYZ Inc. where User A authenticates and lands on the Order Details page. User A can view all their previous entries, search for an order by entering the name of the customer, vendor, invoice number, etc. Functionality to add details is provided within the app. User A clicks the add new orders button and a form gallery is displayed. User A can add multiple records by creating new lines with the press of a button in the form gallery. A local collection with all the entries on the form is created in PowerApps. Once User A hits the “Finish & Save” button, an item for each entry is created on the SharePoint List and the Order Details gallery is updated with these newly added records.

Let’s look at the component-wise description of the controls in the app. The schema for data on the SharePoint List is:

S.No Column Name Column Data Type
1 Title (Order Number) Single Line of Text (255 Chars)
2 Customer Single Line of Text (255 Chars)
3 Shipping Address Single Line of Text (255 Chars)
4 Billing Address Single Line of Text (255 Chars)

On the App -> OnStart option, the expression used is:

ClearCollect(DynamicGallery,{Value:1}); Clear(OrderCollection); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order1"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order2"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order3"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order4"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order5"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order6"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order7"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order8"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order9")))

Explanation: Here, I am creating a collection called “DynamicGallery”; it drives the number of rows shown in the gallery control used for creating new orders. I am also creating another collection, “OrderCollection”, which contains all the order details from the SharePoint list named “OrderDets”.

Note: The “StartsWith” function is not delegable if a variable is passed as the second argument, which is why I am using multiple “Collect” statements to iterate over all possible values.

Galleries to create dynamic entries in a Data Source in PowerApps1

  1. This icon is the Home Page icon and clicking on this navigates the user to the home screen
  2. This icon is the Order Details Screen icon and clicking on this navigates the user to the Order Details Screen
  3. This icon is the Edit an Item icon and clicking on this allows the user to edit a particular item
  4. This icon is the Refresh Icon and clicking on this refreshes the data source, the expression used here is:

Refresh(OrderDets);Clear(OrderCollection); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order1"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order2"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order3"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order4"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order5"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order6"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order7"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order8"))); Collect(OrderCollection,Filter(OrderDets,StartsWith(Title,"Order9")))

Explanation: This refreshes the data source (“OrderDets” SharePoint List). It also clears the existing data from the “OrderCollection” collection and refills it with the new data.

  5. This is a gallery control that populates all the items of the SharePoint list
    • The expression used in the “Text” property of the “Order Details” label is:

"Order Details Total Orders Count:"&CountRows(OrderCollection)

Explanation: This expression concatenates the simple text (wrapped in “”) with the integer returned as a result of the “CountRows” function applied on the “OrderCollection” collection.

    • The expression used in the “Items” property of the Gallery is:

Sort(Filter(OrderCollection,If(!IsBlank(TextInput3.Text),StartsWith(Title,TextInput3.Text) || StartsWith(Customer,TextInput3.Text),true)),Value(Last(Split(Title,"r")).Result),Descending)

  6. This is a stack of Text labels used to show the information when an item is selected in the “Order Details” gallery. Expressions used on the labels:

Customer: Gallery5.Selected.Customer, Shipping Address: Gallery5.Selected.'Shipping Address', Billing Address: Gallery5.Selected.'Billing Address'

Explanation: Each line is an individual expression that fetches an attribute of the item selected in the “Order Details” gallery (Gallery5).

  7. This is a button control that enables the gallery control for the user to create dynamic lines and enter the order details. Expression used on this button:

ClearCollect(DynamicGallery,{Value:1});Set(NewOrder,true);Set(ResetGallery,false);Set(ResetGallery,true)

Explanation: Here I am recreating the “DynamicGallery” collection to hold just one value, which corresponds to one row of the newly visible dynamic gallery control. I am also setting two new variables, “NewOrder” and “ResetGallery”, that control the visibility/reset of this dynamic gallery control, the “Total Number of New Orders” label, and the “Finish & Save” button.

Use Galleries to create dynamic entries in a Data Source in PowerApps 2

  8. This is the dynamic gallery control that I customized for user inputs. This gallery control has four text input controls to get the values for each of the attributes of the SharePoint List. The user can create multiple lines (one at a time) to add multiple records in one go. Data from each line is directly patched to the data source to create a new item. The user can remove the line by clicking the “X” icon. Configuration of the elements of the gallery control:

Gallery Properties:
Items: DynamicGallery, Visible: NewOrder,

Explanation: “DynamicGallery” is the collection that holds count of the orders to be added. “NewOrder” is a variable used to set the visibility of the controls.

  9. This icon is to remove the current row from the dynamic gallery. The expression used on this is:

Icon Properties:
OnSelect: Remove(DynamicGallery,ThisItem), Visible: If(ThisItem.Value <> Last(Sort(DynamicGallery,Value,Ascending)).Value,true,false)

Explanation: Pressing this icon removes the current item from the gallery (the “OnSelect” property). The icon’s visibility is set so that it shows up only if the current row is not the last item of the dynamic gallery.

  10. This icon is to add a new row/line to the dynamic gallery. The expression used on this is:

Icon Properties:
OnSelect: Collect(DynamicGallery,{Value: ThisItem.Value + 1}), Visible: If(ThisItem.Value = Last(Sort(DynamicGallery,Value,Ascending)).Value,true,false)

Explanation: Pressing this icon adds a row/line by adding an item to the dynamic gallery collection (the “OnSelect” property). The icon’s visibility is set so that it shows up only on the last item of the dynamic gallery.

  11. This button is to perform the patch action on the “OrderDets” SharePoint List, and it patches all the entries made by User A in the dynamic gallery. The expression used in this control is:

ForAll(
    Gallery3_1.AllItems,
    Concurrent(
        Patch(OrderDets, Defaults(OrderDets), {Title: TextInput2_6.Text, Customer: TextInput2_7.Text, 'Shipping Address': TextInput2_4.Text, 'Billing Address': TextInput2_5.Text}),
        Patch(OrderCollection, Defaults(OrderCollection), {Title: TextInput2_6.Text, Customer: TextInput2_7.Text, 'Shipping Address': TextInput2_4.Text, 'Billing Address': TextInput2_5.Text})
    )
);
ClearCollect(DynamicGallery, {Value: 1});
Set(NewOrder, false);
Refresh(OrderDets)

Explanation: In this control, the Concurrent function executes two Patch commands: one against the data source (the “OrderDets” SharePoint list) and the other against the local collection (“OrderCollection”), based on the inputs made by User A in each line/row of the dynamic gallery. The “DynamicGallery” collection is then reset to hold a single value, the variable “NewOrder” is set to “false” to toggle the visibility of the dynamic gallery, and we finally refresh the data source.

Note: We are doing a concurrent patch instead of refreshing and recollecting the data in the collection “OrderCollection” from the data source to optimize the operations in the app.

  12. This is the text label control that displays the total number of current lines/rows User A has created. The expression used here is:

"Total Number of New Orders: "& CountRows(Gallery3_1.AllItems)

Explanation: Here the text “Total number of New Orders” is being concatenated with the number of rows of the dynamic gallery.

Use Galleries to create dynamic entries in a Data Source in PowerApps 3

  13. This is the text input control of the dynamic gallery. Here I am validating the text input and checking through the “OrderCollection” collection whether the entered order number already exists. If it does, the user will get an error notification. The expression used in the “OnChange” property of this control is:

If(TextInput2_6.Text in OrderCollection.Title,Notify("Order Number already exists!",NotificationType.Error))

Explanation: Here the if condition checks if the text of the text input control exists in the “Title” column of the “OrderCollection” collection and pops an error message if the condition is met.

  14. This is the error notification generated when the user enters an existing order number in the text input.

Use Galleries to create dynamic entries in a Data Source in PowerApps 4

  15. This is the App settings page of the app, where we increase the data row limit for non-delegable queries from the default of 500 to 2000.

In this article, I have shown a basic implementation of the dynamic galleries concept and the multiple items/records patch function for a SharePoint data source. This can be replicated with minor changes in the expressions for other data sources such as CDS, Excel, SQL, etc.

I hope you found this interesting and it helped you. Thank you for reading!

AIS is the 2019 MSUS Partner Award Winner – Business Applications – Dynamics 365 for Sales. This is our vision for the Power Platform era.

I am incredibly excited to share that AIS has been announced as the 2019 MSUS Partner Award Winner – Business Applications – Dynamics 365 for Sales at #MSInspire!

Some background on how we won:

When the National Football League Players Association (NFLPA) needed to score a big win for its members, they brought in the AIS team to build a single, shared player management system, called PA.NET. AIS extensively customized Dynamics 365 for Sales to meet the unique needs of NFLPA, integrated it with Office 365… and then took it all to the cloud with Microsoft Azure.

Using Dynamics 365 for Sales, PA.NET provides one master set of player data and powerful reporting tools. Now employees across the organization can turn to the same system to answer questions, uncover marketing and licensing opportunities, and identify other ways to help members. When a specific licensing request comes in, they can find the right person, or people, in minutes.

So where do we go from here? From Dynamics to Power Platform.

Our Business Applications & Automation Practice is investing heavily in Dynamics and the Power Platform. We recognize that an organization’s adoption of the Power Platform should be thought of as a journey, not a one-off “app of the moment” solution. By focusing on enterprise management and leveraging the Common Data Service (CDS) as much as possible, we help clients like NFLPA scale their adoption as they migrate workloads and make use of PowerApps, Power BI, Flow, and Dynamics 365.

Power Platform Technologies

Earlier this year, we worked with friends in the business applications community around the world to launch our Power Platform Adoption Framework. Mature organizations realize that rigor, discipline, and best practices are needed to adopt the platform at scale.

The Power Platform Adoption Framework is the start-to-finish approach for adopting the platform at scale.

It helps enterprise organizations:

  • Get to value quickly
  • Educate, train, and grow their community of developers and power users
  • Create durable partnerships between business, IT, and the user community
  • Continuously improve ROI on the platform by identifying and migrating new workloads
  • Blend agile, rapid app development with rigorous, disciplined enterprise management

I hope that the framework will continue to become a worldwide standard for enterprise-grade adoption of the Power Platform. I’ve been lucky to collaborate with Power Platform experts and users around the world to create the Power Platform Adoption Framework. I’m proud to say that AIS is fully behind the framework, sharing it with the community, and committed to its future development as best practices for scaled adoption evolve. We’re sharing it so that everyone can use it because we believe that a vibrant and thriving community around this technology is good for everyone who uses it.

Please join me in congratulating the AIS team, and please join us on this journey to scale the Power Platform to meet the challenges of the years to come.

A Single Place to Manage, Create, and Consume: Azure Monitor and OMS

The integration of the Operations Management Suite (OMS) into Azure Monitor is now complete for both Azure Commercial and Azure Government. This change by Microsoft gives Azure Monitor/OMS users a single place to manage, create, and consume Azure monitoring solutions. No functionality has been removed, and the documentation has been consolidated under the Azure Monitor documentation. With this consolidation of services, there have been some terminology changes that affect the way one talks about Azure Monitor components. The consolidation of OMS and other Azure services into Azure Monitor simplifies the way you manage the monitoring of your Azure services.

Updated Terminology

Microsoft has updated some of the terminology for Azure Monitor components to reflect the transition from OMS. Here are some examples:

  • The log data for Azure Monitor is still stored in a Log Analytics Workspace, but the term Log Analytics in the Microsoft documentation is now Azure Monitor Logs.
  • The term Log Analytics now refers to the page in the Azure portal used to write and run queries and analyze log data.
  • What were once known as OMS management solutions have been renamed monitoring solutions (items like Security & Compliance and Automation & Control).

Azure Monitor — Your 1 Stop “Monitoring & Alerting” Shop

Azure Monitor is now pretty much the one-stop shop for your monitoring and alerting needs (the exception being Azure Security Center, which is still the place to go for most of your security and compliance needs).

Azure Monitor is broken out into four main categories in the Azure Portal:

  1. The main components of Azure Monitor
  2. Insights
  3. Settings
  4. Support + Troubleshooting

The main components include the Activity log, Alerts, Metrics, Logs, Service Health, and Workbooks.

Under Insights, there is Application, Virtual Machines, Containers, Network, and “…More”.

The Settings category includes Diagnostics settings and Autoscale.

And finally, under Support + Troubleshooting, there is Usage & estimated costs, Advisor recommendations, and New support request.

Check out the list below for an overview of the Azure Monitor components and their descriptions:

  • Overview: Overview of Azure Monitor
  • Activity Log: Log data about the operations performed in Azure
  • Alerts: Notifications based on conditions found in monitoring data, both metrics and logs
  • Metrics (Metrics Explorer): Plot charts, visually correlate trends, and investigate spikes and dips in metric values
  • Logs (Azure Monitor Logs): Perform complex analysis across data from a variety of sources (see the query example after this list)
  • Service Health: Provides a personalized view of the health of the Azure services and regions you’re using
  • Workbooks: Combine text, Analytics queries, Azure Metrics, and parameters into rich interactive reports
  • Applications: Application Performance Management service for web developers
  • Virtual Machines: Analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes
  • Containers: Monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS)
  • Network: Tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network
  • More: Replacement for the OMS Portal Dashboard
  • Diagnostic Settings: Configure diagnostic settings for Azure resources (formerly known as Diagnostic Logs)
  • Autoscale: Consolidated view of Azure resources that have Autoscale enabled
  • Usage and estimated costs: Consumption and cost estimates of Azure Monitor
  • Advisor Recommendations: Link to Azure Advisor
  • New support request: Create a support request
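The Logs component is backed by a Log Analytics workspace, so the same data can be queried programmatically as well as from the portal. As a rough illustration only (not part of the original setup), here is a minimal sketch using the azure-monitor-query Python package; the workspace ID and the KQL query are placeholders:

# Minimal sketch: query Azure Monitor Logs (Log Analytics) from Python.
# Assumes the azure-monitor-query and azure-identity packages are installed;
# WORKSPACE_ID is a placeholder for your Log Analytics workspace ID.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Count Activity Log records per resource group over the last day.
query = """
AzureActivity
| summarize Count = count() by ResourceGroup
| order by Count desc
"""

response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))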

Passing just about anything from PowerApps to Flow with the newly released JSON function

In this article, I will show you how we can send data from a Canvas App using the freshly released JSON function. I will pass data from a data table (of a SharePoint list), the microphone control (audio recording), and the camera control (photo) to an MS Flow. Conditional logic is set up in the flow to check the file type and create the files accordingly in a dedicated SharePoint library.

This article focuses on a canvas app and a flow. We will look at the component-wise structuring of both the app and the flow to achieve the objective.

Canvas App

Let’s look at the control-wise screens and functions used in the Canvas App.

  1. Data from a SharePoint list is displayed in a Gallery control in the app. A user can export this data to a PDF file, save it to a SharePoint document library, and open it in the browser window.

Gallery Control

Here, we have a Gallery (‘Gallery2’) control that is populated with the data from a SharePoint List. The data is filtered to show only the first 10 records. The expression used on the ‘Items’ property of the Gallery control is:

FirstN(ShowColumns(OrderDets,"Title","Customer","ShippingAddress","BillingAddress"),10)

Explanation: Get the first 10 items from the ‘OrderDets’ SharePoint list and get the columns as specified.

The ‘Create PDF’ button creates a local collection and then triggers an MS Flow and passes the collection as an argument along with the desired file name using the JSON function. Finally, once the PDF is created and the Flow is executed successfully, the PDF file is opened in a new tab of the browser. The expression used on this button is:

ClearCollect(PDFCollection,{Name:Concatenate("Test123",Text(Today()),".pdf"),Url:JSON(ShowColumns(Gallery2.AllItems,"Title","Customer","ShippingAddress","BillingAddress"))});Launch(CreateFilesSharePoint.Run(JSON(PDFCollection,JSONFormat.IncludeBinaryData)).responsereturned)

Explanation: The ‘ClearCollect’ function creates a collection named ‘PDFCollection’ that stores the data from the gallery control and the name of the PDF file. The PDF file name is a concatenated string of ‘Test123’, today’s date, and the ‘.pdf’ extension. The ‘Url’ key inside ‘PDFCollection’ stores a string value for the table-formatted Gallery items, produced by the JSON function. This value is later parsed as JSON when it is sent as an argument to the flow. The ‘Launch’ function opens a new browser window with the newly created PDF file’s URL, received as a response from the ‘CreateFilesSharePoint’ flow.

  1. The Microphone control on the app is used to record audio. Multiple recordings can be created and played/viewed on the gallery control.

Microphone Gallery Control

Here, we have a Microphone control ‘Microphone1’ to record the audio inputs and store that into a local collection ‘AudioCollection’. The Expression used on the ‘OnStop’ property of the Microphone control is:

Collect(AudioCollection,{Name:Concatenate("Audio",Text(Today()),Text(CountRows(AudioCollection)),".mp3"),Url:Microphone1.Audio})

Explanation: The ‘Collect’ function updates the ‘AudioCollection’ collection, storing each audio recording with a unique file name. The file name is a concatenated string of ‘Audio’, today’s date, the index of the audio file, and the ‘.mp3’ extension.

The ‘Submit’ button triggers the Flow and creates all the audio recordings as separate files on the SharePoint document library. The Expression used on this button is:

CreateFilesSharePoint.Run(JSON(AudioCollection,JSONFormat.IncludeBinaryData))

Explanation: Here the JSON function converts the audio file URL to binary data and sends the ‘AudioCollection’ data to the ‘CreateFilesSharePoint’ flow.

The ‘Clear’ button clears data from the ‘AudioCollection’.

  1. The Camera control is used to capture photos in the canvas app. Multiple pictures can be captured and viewed in the Gallery control.

Camera Gallery Control

Here, we have a camera control ‘Camera1’ to capture a picture and store it into a local collection ‘ImageCollection’. The Expression used on the ‘OnSelect’ property of the Camera control is:

Collect(ImageCollection,{Name:Concatenate("Image",Text(Today()),"-",Text(CountRows(ImageCollection)),".jpg"),Url:Camera1.Photo})

Explanation: The ‘Collect’ function updates the ‘ImageCollection’ collection with a unique file name and the URL of the photo taken from the camera control. The file name is a concatenated string of ‘Image’, today’s date, the index of the photo in the gallery control, and the ‘.jpg’ extension.

The ‘Submit’ button triggers the Flow and creates all the images as separate files on the SharePoint document library. The Expression used on this button is:

CreateFilesSharePoint.Run(JSON(ImageCollection,JSONFormat.IncludeBinaryData))

Explanation: Here, the JSON function converts the image file URL to binary data and sends the ‘ImageCollection’ data to the ‘CreateFilesSharePoint’ flow.

The ‘Clear’ button clears data from the ‘ImageCollection’.

MS Flow

Coming to the ‘CreateFilesSharePoint’ flow: This flow is triggered by the button controls on the different screens in the Canvas App.

Action 1: Initialize a variable -> holds the input coming from the canvas app.

Action 2: Initialize a variable (2) -> holds the string used to send the response back to the canvas app.

Action 3: Parse JSON: Parse the data received from the canvas app according to a schema in which an array contains objects with the attributes ‘Name’ (file name) and ‘Url’ (file content), making the values available as dynamic content.

Flow 1
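For reference, the payload produced by the app’s JSON calls is an array of objects with ‘Name’ and ‘Url’ attributes, where ‘Url’ holds either the serialized gallery table or a base64 data URI when JSONFormat.IncludeBinaryData is used. The short Python sketch below uses made-up sample values purely to illustrate that shape and the file-type condition that follows; it is not the flow’s implementation:

# Illustration only: the shape of the JSON payload sent from the canvas app
# and the PDF vs. image/audio branching the flow performs on each item.
# The sample values below are invented.
import base64
import json

sample_payload = json.dumps([
    {"Name": "Image2019-07-20-0.jpg",
     "Url": "data:image/jpeg;base64,/9j/4AAQSkZJRg=="},
    {"Name": "Test1232019-07-20.pdf",
     "Url": "[{\"Title\":\"Order 1\",\"Customer\":\"Contoso\"}]"},
])

for item in json.loads(sample_payload):
    name, url = item["Name"], item["Url"]
    if name.lower().endswith(".pdf"):
        # "Yes" branch: the Url value is itself JSON (the gallery table),
        # which the flow parses again to build an HTML table.
        records = json.loads(url)
        print(f"{name}: {len(records)} record(s) to render as an HTML table")
    else:
        # "No" branch: the Url value is a data URI; the decoded bytes are
        # what the Create File action writes to the SharePoint library.
        content = base64.b64decode(url.split(",", 1)[1])
        print(f"{name}: {len(content)} bytes of file content")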

Action 4: Apply to each control: Iterate over each file item from the body output of the Parse JSON action.

Action 5: Condition control within the Apply to each control: Split the file name and check whether the extension is .pdf.

If No,

Action 6: Create File 2 in SharePoint: Creates a file for the image/audio type in the defined library.

If Yes,

Action 7: Parse JSON 2: The data content passed from PowerApps as the ‘Url’ key is parsed into individual elements so that an HTML table can be created and, finally, a PDF file generated from it.

Action 8: Create HTML Table: Creates an HTML table with the column names as headers and gets the data from the Parse JSON 2 action.

HTML Table from Parse JSON
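Conceptually, the output of this action is an HTML table whose headers are the column names and whose rows are the parsed records. The short Python sketch below, with made-up records, only illustrates that output shape; it is not the flow’s actual implementation:

# Illustration only: column names become headers, each record becomes a row.
records = [
    {"Title": "Order 1", "Customer": "Contoso",
     "ShippingAddress": "1 Main St", "BillingAddress": "1 Main St"},
]

columns = list(records[0].keys())
header = "".join(f"<th>{c}</th>" for c in columns)
rows = "".join(
    "<tr>" + "".join(f"<td>{r[c]}</td>" for c in columns) + "</tr>"
    for r in records
)
print(f"<table><tr>{header}</tr>{rows}</table>")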

Action 9: Create File in OneDrive: To create a temporary HTML file from the HTML table generated in the previous step and store it in the ‘Hello’ folder on the OneDrive.

Action 10: Convert File in OneDrive: To convert the previously created HTML file to a PDF document.

Action 11: Create File 2 in SharePoint: To create the PDF file from the converted file from the previous action. The file is stored in the specified document library on SharePoint.

Action 12: Delete File from OneDrive: To delete the temporary HTML file that was created in Action 9.

Action 13: Get file Properties SharePoint: To get the URL of the PDF file created in SharePoint.

Action 14: Set Variable: Set the URL to the file as a string value.

Create and Transform Files

Action 15: Respond to PowerApps: Send the URL of the file created on SharePoint to PowerApps. (Outside of the apply to each control)

Respond to PowerApps

In this blog, we have seen how we can use the JSON function to pass data from PowerApps to Flow. We were able to successfully send binary data (image files, audio recordings) and a gallery data table. We can also send collections, data directly from data sources with appropriate filters, and so on. Note that the JSON function does not support sending attachments or nested arrays/objects.

I hope you found this interesting and this helped you. Thank you for reading!

Most of the time, deploying database scripts is tricky, time-consuming, and error-prone, especially when a script fails due to a mismatched schema, missing prerequisite data, dependencies, or any other factor.

Thankfully, different tools can automate and simplify the process…one of which is SQL Change Automation from Red Gate.

What is SQL Change Automation?

Put simply, SQL Change Automation (SCA) allows you to develop and deploy changes to a SQL Server database. It automates validation and testing, which can be performed on build and release management systems such as Azure DevOps, TeamCity, Octopus Deploy, Bamboo, and Jenkins.

Installation & Required Tools

  • Download SQL Toolbelt
  • Run the .exe and select only “SQL Change Automation 3.0” and “SQL Change Automation PowerShell 3.1”
  • Visual Studio 2015/2017
  • Azure DevOps

Automated Deployment with SCA & ADO CI/CD

Create an SCA Project

  1. First, create a new SQL Change Automation project by clicking the Create Project button from the SQL Change Automation menu under Tools.
  2. Select the Development (source) and Deployment Target Databases. SCA will detect the differences and create a baseline script.
    Note: This baseline script is created from the selected Target database
  3. The next step is to identify the source database changes that need to be scripted and deployed to the target database. For this, click on the Refresh button in the SQL Change Automation tab. It will list the database objects which are different from the target database.
  4. After selecting the required objects, click the Import and Generate Scripts button. It will automatically generate all the required scripts.
  5. Now go ahead and build the solution!

Setup GIT Repository in DevOps

  1. Create a new repository and upload the SCA project.
  2. Create a new Feature branch and commit the changes to this repo. Raise a pull request and assign the reviewer.
  3. Once the pull request is approved and marked complete, the changes will automatically merge to the master branch.

Setup CI/CD Pipeline in DevOps

  1. Create a new Build pipeline and select the repo.
  2. Add a new build task Redgate SQL Change Automation: Build and configure it.
    Note: This extension must be first installed into your Azure DevOps organization before using it as a task in the build flow
  3. Save the pipeline and queue a new build.
  4. Next, set up a Release pipeline, create a new Release, and select the Build artifact as the input.
  5. Add a new Release task Redgate SQL Change Automation: Release and specify the configuration details like operation type, build package path, target SQL instance, database name, and credentials.
    Note: This extension must be first installed into your Azure DevOps organization before using it as a task in the release flow
  6. Save the Release pipeline and trigger a new Release.
  7. Once the Release is successful, connect to the target database and verify if the new database objects are deployed.
    Note: The target database server can be in Azure or on-premises.

Here is a short video on how to configure SCA and integrate it with an Azure DevOps CI/CD pipeline.

Automated Rollback Using SCA & ADO CI/CD

Rolling back a database deployment is a complicated task. Application code, by contrast, is rather easy to roll back: just deploy the previous version of the code package and you’re done. Databases are not as flexible. Imagine there’s an error in a script and all usernames get deleted. There isn’t a good way to roll that back! Sure, a backup could be restored. But when was that backup taken? Have any new users been added since that backup? What data will be lost if the backup is restored?

The rollback process needs to be thought through before deployment so that it can be applied effectively. The steps below walk through a simple example of how a rollback can be applied in an automated manner using SCA with CI/CD.

  1. First, create a new folder in your SCA solution and name it Rollback. Add your rollback scripts to this folder.
  2. While creating migration scripts (i.e., UP scripts), also create Down scripts. To create rollback scripts, right-click the database object and select the View Revert Script option.
  3. Save the script in a new file and save it under the Rollback folder.
    Note: This rollback script will not be executed as part of the deployment.
  4. If there are any issues post-deployment, copy this rollback script to the Migration folder. Insert the Metadata and save the script.
  5. Commit the script to GIT and complete the pull request.
  6. Queue a new Build and let the Release complete.
  7. Once successful, verify the changes.

Here is a short video on how to perform rollback with SCA generated scripts.

Key Terms:

  • Baseline: The schema of the Deployment Target will be read to create a baseline schema.
  • Shadow Database: SCA keeps the shadow database consistent with all the migration scripts currently in the project as needed, and uses it to verify scripts to detect problems in your code.
  • (Table) [__MigrationLog] keeps track of the migrations and Programmable Objects/additional scripts that have been executed against your database. (Additional executions of Programmable Objects/additional scripts will result in new rows being inserted.)
  • (View) [__MigrationLogCurrent] lists the latest version of each migration/Programmable Object/additional script to have been executed against the database.

Following up on my last post on Azure Web App for Containers, in part two we’ll go through the various types of storage options available with Azure Web App for Containers, along with the scenarios where they fit best.

As of writing this post, there are 3 storage options:

  1. Stateless
  2. Storage on App Service Plan
  3. Storage using a Storage Account File Share

Stateless

As you all know, containers without any volume mounts are completely stateless; that is, the container will not persist data once it is shut down. When you create an instance of Azure Web App for Containers, this is the default option. Reboots on the Azure App Service platform can happen from time to time for maintenance. The only files persisted across reboots in this mode are the logs, which are located under the /home/LogFiles folder.

This scenario is best suited to APIs where you don’t have to store any data on the server itself. A typical use case would be a 3-tier architecture application where each layer resides as a separate resource in Azure.

Storage on App Service Plan

This option allows you to store data on the App Service Plan. To enable it, create an App Setting (WEBSITES_ENABLE_APP_SERVICE_STORAGE=true). In this mode, the /home directory is persisted across reboots; Azure does this by mounting storage behind the scenes at this path and persisting it. Because this storage is maintained by Azure, it is kept performant. From what I have seen, if there is a performance or availability issue with the storage, Azure will try to switch to the secondary copy, and while this is being done the storage becomes read-only.
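If you manage the app programmatically rather than through the portal, the same setting can be applied with the Azure SDK. The sketch below uses the azure-mgmt-web Python package; the subscription ID, resource group, and app name are placeholders, and this is just one possible way to set it:

# Rough sketch: enable App Service Plan storage on a Web App for Containers
# by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=true.
# Assumes azure-mgmt-web and azure-identity; the names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import StringDictionary

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
APP_NAME = "my-container-webapp"

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Read the existing app settings so none of them are dropped.
current = client.web_apps.list_application_settings(RESOURCE_GROUP, APP_NAME)
settings = dict(current.properties or {})
settings["WEBSITES_ENABLE_APP_SERVICE_STORAGE"] = "true"

client.web_apps.update_application_settings(
    RESOURCE_GROUP, APP_NAME, StringDictionary(properties=settings)
)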

When site-level backups are enabled on such an instance, the contents of the /home directory are also backed up. The downside is that this storage is only visible to your web app and is not accessible to the outside world.

This option is best for scenarios where storage is required on the server with minimal maintenance overhead, such as when hosting Drupal or WordPress.

Storage using a Storage Account File Share

As of this writing, this option is still in preview. It allows you to mount storage from a Storage Account, which external services can also connect to. It can be set up from the “Path Mappings” section of the app’s Configuration blade in the Azure portal:

Path Mappings Storage Account

This type of mapping supports both Blob containers and Azure Storage File Shares. Although the functionality provided by both is pretty much the same, the technologies that support them behind the scenes are different. For Blob containers, Blobfuse is used to handle the translation and mapping of file paths to remote blob paths, whereas Azure Storage File Shares are mounted using the SMB protocol (CIFS mounts on Linux).

I would suggest using Azure Storage File Shares rather than Blob containers for these storage mounts, as Blobfuse is not POSIX (Portable Operating System Interface) compliant. For the best performance and stability, use the Azure Storage File Share mount.

While using this option, do keep in mind that the site backups do not back up the mounts. You would have to manage the backups of the mounts on your own.
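One way to cover that gap is to snapshot the file share yourself on a schedule. Below is a minimal sketch using the azure-storage-file-share Python package; the connection string and share name are placeholders:

# Minimal sketch: snapshot the Azure Files share mounted to the web app,
# since site backups do not include mounted storage.
# The connection string and share name are placeholders.
from azure.storage.fileshare import ShareClient

CONNECTION_STRING = "<storage-account-connection-string>"
SHARE_NAME = "webapp-content"

share = ShareClient.from_connection_string(CONNECTION_STRING, share_name=SHARE_NAME)

# Creates a read-only, point-in-time snapshot of the entire share.
snapshot = share.create_snapshot()
print("Snapshot created:", snapshot["snapshot"])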

This option is best suited for scenarios where you need more control over the storage and the ability to connect other devices to the same storage as the Web App for Container.

Stay tuned for the next part!