After much anticipation, the US Department of Defense (DoD) has awarded the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract for cloud computing services to Microsoft over Amazon. The contract is central to the Pentagon’s efforts to modernize core technology and improve networking capabilities, and the decision on which cloud provider was the best fit was not taken lightly.

Current military operations run on software systems and hardware from the 80s and 90s, and the DoD is committed to connecting systems, streamlining operations, and enabling emerging technologies through cloud adoption.

Microsoft has always invested heavily back into its products, which is the leading reason we went all-in on our partnership. We have been strengthening our capabilities and participating in numerous Microsoft programs since the inception of the partner program in 1994.

In our experience, one of the many differentiators for Microsoft Azure is its global networking capability. Azure’s global footprint includes 100K+ miles of fiber and subsea cables and 130 edge locations connecting over 50 regions worldwide. That’s more regions across the world than AWS and Google combined. Networking is a vital capability for the DoD, which is investing heavily in connecting its bases and improving networking speeds, data sharing, and operational efficiency, all without sacrificing security and compliance.

Pioneering Cloud Adoption in the DoD

We are fortunate to have been on the front lines of Azure from the very beginning. AIS has been working with Azure since its 2008 pre-release under the code name Red Dog. We have been a leading partner in helping organizations adopt Azure since it officially came to market in 2010, and we have had the privilege of working on numerous large, complex projects across highly regulated commercial and federal enterprises ever since.

When Azure Government opened for pre-release in the summer of 2014, AIS was among the few partners invited to participate and led all partners in client consumption. As the first partner to successfully support Azure Government IL5 DISA Cloud Access Point (CAP) connectivity and ATO for the DoD, we’ve taken our experience and developed a reliable framework to help federal clients connect to the DISA CAP and expedite the Authority to Operate (ATO) process.

We have led important early adoption projects that show the path forward with Azure Government in the DoD, including the US Air Force, US Army EITaaS, Army Futures Command, and the Office of the Under Secretary of Defense for Policy. These experiences have allowed us to demonstrate proven success moving DoD customers’ Impact Level 2, 4, 5, and (soon) 6 workloads to the cloud quickly and thoroughly with AIS’ DoD Cloud Adoption Framework.

To enable faster cloud adoption and native cloud development, AIS pioneered DevSecOps and built Azure Blueprints to help automate federal regulatory compliance and the ATO process. We were also the first to achieve Trusted Internet Connections (TIC) compliance and DoD Cyber Security Service Provider (CSSP) certification, among others.

AIS continues to spearhead the development of processes, best practices, and standards across cloud adoption, modernization, and data & AI. It’s an exceptionally exciting time to be a Microsoft partner, and we are fortunate enough to be at the tip of the spear alongside the Microsoft product teams and enterprises leading the charge in cloud transformation.

Join Our Growing Team

We will continue to train, mentor, and support passionate cloud-hungry developers and engineers to help us face this massive opportunity and further the mission of the DoD.

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

Rehosting Considerations

What is Rehosting?

Rehosting is an approach to migrating business applications hosted on-premises in data center environments to the cloud by moving the application “as-is,” with little to no change to its functionality. A common rehosting scenario is the migration of applications initially developed for an on-premises environment so they can take advantage of cloud computing benefits, such as increased availability, faster networking speeds, reduced technical debt, and a pay-per-use cost structure.

In our experience, rehosting is well suited for organizations under time-sensitive data center evacuation mandates, facing pressure from leadership to migrate, running COTS software that doesn’t support modernization, or those with business-critical applications on end-of-support technologies. These organizations often opt for a rehost then transform strategy, as reviewed in the following blog, Cloud Transformation Can Wait… Get Me to the Cloud Now!

Below we outline important considerations, benefits, and challenges associated with a rehost strategy, based on our real-world experiences moving both custom and packaged commercial on-premises applications to the cloud. We’ll also discuss steps to take when your migration initiative goes beyond rehosting, requiring the assessment of alternative migration approaches, such as re-platforming and refactoring.

Critical Considerations

When moving on-premises business-critical applications to the cloud, there are critical considerations that span technical, operational, and business domains. Below are three key components not to be overlooked when defining your cloud migration strategy:

  • Establishing a shared vision: Ensuring you have set goals and an executive sponsor.
  • Understanding your why: Why are you migrating to the cloud in the first place?
  • Defined business impact: What impact do you expect from your migration efforts and are your goals realistic based on the chosen approach?

Establish a Shared Vision with Stakeholders

The landscape of on-premises systems is often governed by many stakeholders, both business and IT, with competing goals, risk profiles, and expected outcomes from a migration effort. Having a clear vision for your rehost initiative, with key roles and responsibilities defined, is critical to the timeliness, investment, and overall success of your project. Finding an executive sponsor to unite the various groups, make decisions, and define the business goals and expected outcomes is vital to managing risk.

As part of creating this shared vision, the executive sponsor needs to ensure:

  • Goal Alignment: A shared vision among business and IT stakeholders sets direction and expectations for the project. It allows all parties, including vendors and internal resources, to understand the goal(s) and the role they’ll play in the project.
  • Sufficient Budgeting and Resource Allocation: Appropriate budget and resources must be allocated before the start of the migration effort to ensure timely project completion.
  • Proper Documentation of Existing Systems: Critical information about on-premises systems and operations is often insufficient or missing entirely. System documentation is mandatory to migrate systems and uphold their intended purpose.
  • Product Ownership: On-premises business application suites are often acquired or internally developed. Original vendors may be out of business, leaving products no longer viable; similarly, a custom product may no longer be supported or understood due to missing source code. An owner needs to be designated to determine the future of the product.
  • Organizational Change Management: Without user adoption, your cloud migration will fail. Change management efforts enable cloud technology adoption and require proper planning and execution from the start.

The considerations outlined above should be discussed up front, and partnerships among stakeholder groups must be established to accomplish the intended goal(s) of migration under executive sponsor leadership.

Understand Why You're Moving

You’ve heard the stories about failed cloud migrations. Those stories often start with misaligned expectations and a rushed execution, which are among the top reasons cloud migrations result in a backslide to on-premises environments. Migrating to the cloud isn’t a silver bullet. Not every organization will experience cost savings or even immediate functionality improvements from a rehosting project, but there are many opportunities for cost avoidance and optimization of spend.

As an IT director or manager, it’s critical to ensure executive-level sponsors understand how different migration approaches align with anticipated outcomes. There’s often pressure from leadership to migrate to the cloud, and understandably so, given the countless cloud benefits and the many challenges associated with aging on-premises solutions. However, understanding and communicating what’s realistic for your organization, and how different approaches address various business goals, is crucial.

Data Center Evacuations & Unsupported Technology

Organizations migrating based on a mandated data center evacuation or the security and compliance risks associated with unsupported or end-of-support technology often look to a rehost strategy as a first step. This helps accomplish the business goal of reducing technical debt or remediating compliance concerns quickly.

Reaping the Benefits of Cloud-Native Solutions

There are many other reasons organizations look to the cloud, such as staying competitive, accelerating time to value, or the ability to innovate. To fully realize the cloud outcomes that motivate these decisions (greater flexibility, scalability, data security, built-in disaster recovery options, and improved collaboration), additional planning and refactoring of on-premises applications are often required. In these cases, we sometimes see a rehost as the first stage (as leadership wants to see quick results or has made a public commitment to the cloud), followed by more advanced modernization efforts.

To get to the root of goals and expectations, consider the following questions as you build your roadmap:

  1. What are your business objectives for cloud adoption, and how will they help further the company vision?
  2. Is there a set timeline to complete the cloud migration effort?
  3. What internal and external resources are available to support a cloud migration?
  4. How many applications are in your portfolio, and do you plan to migrate everything, or are you considering a hybrid model approach?
  5. What are the technical requirements and interdependencies of your applications? How will you assess cloud readiness?
  6. What are the necessary governance, security, and compliance considerations?
  7. Who will perform the migration and manage the workloads on the new cloud platform afterward? Will you be doing it yourself, or will it be a shared initiative?
  8. How do you intend to use automation to reduce manual efforts and streamline provisioning and governance?

As you answer the questions above, you may find that a rehost effort is sufficient. Likewise, you may choose to explore a lead horse application approach as part of your migration strategy to better understand the value derived from various modernization tactics.

Uncover Additional Cloud Benefits

If your organization is interested in exploring cloud benefits that go beyond what a rehost effort can provide, migration options that are more involved than rehosting may be worth your consideration. Organizations looking to modernize through re-platforming or refactoring may be motivated by cloud benefits such as:

  • Faster time to market, product release cycles, and/or pace of innovation
  • Enriched customer and end-user experiences
  • Improved employee technology, collaboration, and processes
  • Better reliability and networking speeds
  • Reduced cost of labor and/or maintenance
  • Ability to leverage emerging technology
  • Built-in disaster recovery options
  • Flexibility and scalability
  • Data security
  • Cost allocation for budgeting and showback/chargeback
  • Move from Capex to Opex (or realize Capex by buying resource commitments)

If you are facing tight timelines to migrate, a rehost effort can get you one step closer to realizing the above benefits. After an initial migration, a proof of concept can help you understand the business impact the various approaches offer while incrementally progressing your cloud transformation.

TO LEARN MORE ABOUT THE DIFFERENT APPROACHES TO MIGRATION AND MODERNIZATION, DOWNLOAD OUR FREE WHITEPAPER.

Challenges

While rehosting is a faster, less resource-intensive migration approach and a great first step into the cloud, it comes with challenges and limitations.

The primary limitation of migrating certain on-premises applications to the cloud is the application’s inherent cloud compatibility. Some applications have internal and external dependencies that limit their ability to take advantage of more advanced cloud benefits once rehosted.

While rehosting allows you to modernize the application environment, resulting in outcomes such as reduced data center costs, other cloud benefits aren’t fully realized. Outcomes such as elasticity and the ability to take advantage of cloud-native features are not available with a strictly rehost strategy.

While generally more cost-effective than on-premises hosting, a rehosted application can sometimes cost more to run in the cloud than a re-platformed or refactored one, particularly without a FinOps strategy to master the unit economics of the cloud for competitive advantage. To show fast progress, rehosting is often a great transitional stage on the way to a cost-effective cloud solution, especially for organizations on a tight timeline. During this stage, managing cloud costs and realizing cloud value with a FinOps practice is key.

Not Sure Where to Start?

If you’re stuck in analysis paralysis, work with a consultant who has been through various migration projects from start to finish and understands the common challenges and complexities of different approaches.

Whether you’re considering Azure, Office 365, Power Platform, or another cloud platform, AIS has a range of Adoption Frameworks and Assessments that can help you understand your options. With our help, you can create a shared vision and align business goals with the appropriate migration approaches.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO CLOUD MIGRATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.

SPFx Modern Web Development

SharePoint has been a widely adopted and popular content management system for many large organizations over the past two decades. From SharePoint Portal Server in 2001 to SharePoint Server 2019 and SharePoint Online, the ability to customize the user experience (UX) has evolved dramatically, keeping pace with the evolution of modern web design. SharePoint Framework (SPFx) is a page and web part model that provides full support for client-side SharePoint development with open-source tooling. SPFx works in SharePoint Online as well as on-premises SharePoint 2016 and SharePoint 2019, and in both modern and classic pages. SPFx Web Parts and Extensions are the latest powerful tools we can use to deliver great UX!

Advantages of using SharePoint Framework

1. It Can’t Harm the Farm

Earlier SharePoint (SP) customization executed on the server as compiled, server-side code written in a language such as C#. Historically, we created web parts as full-trust C# assemblies that were installed on the SharePoint servers and had the ability to disrupt SharePoint for all users. Because such code ran with far greater permissions on the server, it could adversely impact or even crash the entire farm. Microsoft tried to solve this problem in SP 2010 by introducing Sandbox solutions, followed by the App Model, now known as the Add-In model.

SPFx development is based on JavaScript running in a browser, making REST API calls to SharePoint and Office 365 back-end workloads; it does not touch the internals of SharePoint.

The SharePoint Framework is a safer, lower-risk model for SharePoint development.

2. Modern Development Tools

By building SPFx elements with JavaScript and its wealth of libraries, the UX and UI can be shaped as beautifully as any modern website. The JavaScript is embedded directly in the page, and the controls are rendered in the normal page DOM.

SharePoint Framework development is JavaScript framework-agnostic. The toolchain is based on common open-source client development tools such as Node.js, npm, TypeScript, Yeoman, webpack, and gulp. It supports open-source JavaScript libraries such as React, Angular, Handlebars, Knockout, and more. These provide a lightweight and rapid user experience.

3. Mobile-First Design

“Mobile first”, as the name suggests, means that we start the product design from the mobile end, which has more restrictions to make the content usable in the small space of a phone. Next, we can expand those features to a more luxurious space to create a tablet or desktop version.

Because SharePoint Framework customizations run in the context of the current page (and not in an IFRAME), they are responsive, lightweight, accessible, and mobile-friendly. Mobile support is built-in from the start. Content reflows across device sizes and pages are fast and fluid.

4. Simplified Deployment

There is some work to do at the beginning of a new project to set up the SPFx structure to support reading from a remote host. An App Catalog must be created, as well as generating and uploading a manifest file. If the hosted content is connected with a CDN (Content Delivery Network), that will also require setup. However, once those structural pieces are in place, deployment is simplified to updating files on the host location. It does not require traditional code deployments of server-side code, with its attendant restrictions and security review lead time.

5. Easier Integration of External Data Sources

With SPFx, calls to data from external sources may be easier since it’s web content hosted outside of SharePoint.

SPFx Constraints and Disadvantages

At the time of this blog, the SharePoint Framework is only available in SharePoint Online, on-premises SharePoint 2016, and SharePoint 2019. SPFx cannot be added to earlier versions of SharePoint such as SharePoint 2013 and 2010.

SharePoint Framework Extensions cannot be used in on-premises SharePoint 2016 but only in SharePoint 2019 and SharePoint Online.

SPFx, like any other client-side implementation, runs in the context of the logged-in user. Permissions cannot be elevated to impersonate an admin user as in farm solutions, CSOM (client-side object model) contexts, or SharePoint Add-ins and Office 365 web applications. The application’s functionality is limited to the current user’s permission level, and customization is based on that as well. To overcome this constraint, a hybrid solution can be implemented in which SPFx communicates with Application Programming Interfaces (APIs). The APIs would be registered as a SharePoint add-in that uses the app-only context to communicate with SharePoint. For this communication between SPFx and the API to work, the API must support CORS (Cross-Origin Resource Sharing), since the calls are cross-domain, client-side requests.

SPFx is also not suited for long-running operations, as it is an entirely client-side implementation; a web request cannot wait indefinitely for a long-running operation to complete. For those processes, a hybrid approach works well: the long-running operation can be implemented in an Azure WebJob or Azure Function, and SPFx can receive updates from it via a webhook.
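
As a rough sketch of the server side of that hybrid approach, a queue-triggered Azure Function can do the long-running work and then notify a webhook endpoint the SPFx solution listens on. The queue name, webhook URL, and class names below are hypothetical; C# is used for the back-end piece:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class LongRunningReportFunction
{
    private static readonly HttpClient Http = new HttpClient();

    // The SPFx web part enqueues a message (for example, via an API call) and
    // returns immediately; this function performs the long-running work out of band.
    [FunctionName("LongRunningReport")]
    public static async Task Run(
        [QueueTrigger("report-requests")] string requestId,
        ILogger log)
    {
        log.LogInformation($"Processing report request {requestId}");

        await Task.Delay(TimeSpan.FromMinutes(2)); // stand-in for the real work

        // Notify the webhook endpoint the SPFx solution subscribed to (URL is hypothetical).
        await Http.PostAsync($"https://contoso-functions.example/api/complete/{requestId}", null);
    }
}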

Developers coming from a server-side background will face a learning curve with entirely client-side development, but TypeScript comes to the rescue.

SPFx Comparison to Other Technologies and Models

SharePoint lists come in handy for many organizations when entering data, but customers always ask for the ability to display the data in some reporting format, such as a dashboard. Below we compare the different ways we can accomplish this and why SPFx is a good fit:

  • Classic Web Part Pages: If we do not want to use the SharePoint Framework, SharePoint 2019 still supports classic web part pages. You can add content editor web parts and deploy any custom JavaScript/jQuery code. However, with this approach, uploading the JS files to a SharePoint library and manually adding pages becomes cumbersome, and we may end up writing custom JSOM (JavaScript object model) code to ease deployment. Microsoft does not recommend this approach, and it may no longer be supported in the future. Also, if you want to render any custom tables, you need to write custom code or use a third-party component. Using the SharePoint Framework, we can easily use Office UI Fabric React components like DetailsList.
  • Custom App: We can design custom applications to deploy in the cloud, which can read the data from SharePoint. The challenge is that each customer environment is different. It’s not always easy to connect to SharePoint from the cloud in a production environment, especially with CAC (Common Access Card) authenticated sites.
  • PowerApps/LogicApps: With newer technologies such as PowerApps, Logic Apps, and Flow, we can design custom SharePoint forms and business logic and connect to SharePoint using the SharePoint connector. In a production environment, however, it is not easy to get connections approved or to connect with on-premises data, and PowerApps and Flow require the purchase of licenses.

Using SPFx, we can quickly design dashboards using Office UI Fabric components. For deployment, we do not need to write any custom utility code; the SharePoint Framework package can create the lists and libraries as well.

Wrapping Up

We hope this blog provided a useful overview of SPFx and its capabilities. Look for our next blog post (Part II) on developing and deploying custom SPFx Web Parts and Extensions, and connecting to APIs/Azure in SharePoint Online and SharePoint 2019!


Blazor is coming! In fact, it’s coming sooner rather than later. Check out ASP.NET’s blog post from April 18, 2019 announcing the official preview.

What is Blazor?

by Vivek Gunnala 

Blazor is a new interactive .NET web framework, part of the open-source .NET platform. Blazor uses C#, HTML, CSS, and Razor components instead of JavaScript. It’s built on open web standards without the need for any plugins or code transpilation, and it works in all modern web browsers, hence the nickname “.NET in the browser”: the C# code runs directly in the browser using WebAssembly. Both client-side and server-side code are written in C#, which allows you to reuse code and libraries between both sides, such as validations, models, etc.

Apps built in Blazor can use existing .NET libraries by leveraging .NET Standard, allowing the same code to be used across platforms. Having begun as an experimental project, Blazor is evolving rapidly, with over 60,000 contributors.
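
As a minimal sketch of that code sharing (the class and its properties are hypothetical), a model with validation attributes can live in a shared .NET Standard library and be referenced by both the Blazor client and the ASP.NET Core server:

using System.ComponentModel.DataAnnotations;

// Defined once in a shared .NET Standard library. Both the Blazor client and
// the ASP.NET Core server reference the same assembly, so the validation
// rules stay in sync on both sides.
public class RegistrationModel
{
    [Required]
    [EmailAddress]
    public string Email { get; set; }

    [Required]
    [StringLength(100, MinimumLength = 8)]
    public string Password { get; set; }
}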

About WebAssembly

At a high level, WebAssembly is explained on the official site as “a binary instruction format for a stack-based virtual machine. It is designed as a portable target for compilation of high-level languages, enabling deployment on the web for client and server applications.”

Should I Use Blazor For My Next Project?

by Ash Tewari 

Blazor’s development status has been promoted from an “Experimental” project to a committed product. This is great news. Blazor is available now as an official preview. Let’s review the factors you should consider when making decisions about adopting Client-Side Blazor for your next production project.

Mono.wasm (the .NET runtime compiled into WebAssembly, executing your .NET assemblies in the browser) does not interact with the DOM directly. It goes through JS interop, which is expensive. The areas where .NET code gets the most net benefit are the model and business logic, not DOM manipulation. If your application is very chatty with the DOM, you might need to carefully assess whether you are getting the expected performance boost from WebAssembly execution of your .NET assemblies. [https://webassemblycode.com/webassembly-cant-access-dom/]
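
To make that cost concrete, here is a minimal sketch of reaching the DOM from .NET through JS interop, using the preview-era IJSRuntime API; the registered JavaScript function name is an assumption for illustration:

using System.Threading.Tasks;
using Microsoft.JSInterop;

public class DocumentTitleService
{
    private readonly IJSRuntime _js;

    public DocumentTitleService(IJSRuntime js) => _js = js;

    // Every call crosses the .NET/JavaScript boundary. Frequent per-keystroke
    // calls like this are what make DOM-heavy Blazor code expensive.
    // "appInterop.getTitle" is assumed to be registered on the host page.
    public Task<string> GetTitleAsync() =>
        _js.InvokeAsync<string>("appInterop.getTitle");
}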

Currently, only the mono runtime is compiled to WebAssembly; your .NET code is executed as-is. This means your .NET code essentially goes through two interpreters, which has a noticeable performance impact. There is work being done to compile .NET assemblies to wasm, which, along with other related improvements in linking and compiling, is expected to improve performance. The fact that Microsoft has decided to commit to Blazor as a product indicates confidence that these performance improvements are likely to become a reality.
[https://www.mono-project.com/news/2018/01/16/mono-static-webassembly-compilation/, https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md]

In the client-side hosting model, your code still runs in the browser sandbox, so you don’t have access to the file system and other OS libraries. This limitation applies to JavaScript as well. In fact, WebAssembly is executed by the JavaScript runtime. Yes, the same runtime that executes the JavaScript in your web application.
[https://medium.com/coinmonks/webassembly-whats-the-big-deal-662396ff1cd6]

Well, if WebAssembly is executed by the same JavaScript runtime, then where do the performance gains everyone is touting come from? The answer is that the gains come from skipping the parsing steps and optimizing the compilation steps: WebAssembly is decoded and JITed instead of parsed and compiled before the JIT step. However, work is still ongoing to make .NET IL interpretation reach the performance levels required to fulfill these promises.
[https://hacks.mozilla.org/2017/02/what-makes-webassembly-fast/]

Remember that your Blazor code executes in the UI thread of the browser, which can create a bottleneck if your application is CPU bound. Ironically, the CPU/computationally intensive applications are also one of the most compelling use-cases for Blazor. You may need to look into running Blazor components in the Web Worker. We will cover this in a separate blog post dedicated to this technique.

Server-Side Blazor

by Sean McGettrick 

Server-side Blazor, previously referred to as Razor Components, allows developers the same freedom to create UI components using C# instead of JavaScript that client-side Blazor does. The primary difference is that the code is hosted on the server instead of the browser. Blazor components and application logic written to run client-side can also be used server-side.

Razor Components support all the functionality a front-end developer would expect in a modern library, including:

  • Parameterization
  • Event handling
  • 2-way data binding
  • Routing
  • Dependency injection
  • Layouts
  • Templating
  • CSS cascading

Razor Components can be nested and reused, similar to React.

Differences from Client-Side

With server-side Blazor, all components are hosted and served from an ASP.NET Core server instead of being run in the browser via WASM. Communication between client and server is handled via SignalR.

Further differences between client and server-side Blazor will be outlined in the next two sections.

Advantages

Server-side Blazor offers a number of advantages over its client-side counterpart. These include:

  • No WASM dependencies. Older desktop browsers and some current mobile browsers lack support for WASM. Since server-side Blazor only requires the browser to support JavaScript, it can run on more platforms.
  • Building on the last point, since the components and application logic sit server-side, the application is not restricted to the capabilities of the browser.
  • Developing the application on an entirely server-based platform gives you access to more mature .NET runtime and tooling support.
  • Razor components have access to any .NET Core compatible API.
  • Application load times in the browser are faster due to a smaller footprint. Only the SignalR Javascript code required to run the application is downloaded to the client.

Disadvantages

There are, however, some disadvantages to using server-side Blazor:

  • There is higher application latency due to user interactions requiring a network round-trip between the browser and the server.
  • Since the application is entirely hosted on the server, there is no offline support. If the server goes down, the application will not function which breaks one of the core tenets of building a Progressive Web Application (“Connectivity independent: Service workers allow work offline, or on low-quality networks”).
  • Because the server is responsible for maintaining client state and connections, scaling the application can be difficult; the server is doing all the work.
  • The application must be hosted on an ASP.NET Core server.

Server-Side Blazor Code Reuse: Razor Pages to Blazor Using an MVVM Approach

by Adam Vincent 

What is MVVM?

In a nutshell, MVVM is a design pattern derived from the Model-View-Presenter (MVP) pattern. The Model-View-Controller (MVC) pattern is also derived from MVP, but where MVC is suited to sit on top of a stateless HTTP protocol, MVVM is suited for user interface (UI) platforms with state and two-way data binding. MVVM is commonly implemented in desktop (WPF / UWP), web (Silverlight), and mobile (Xamarin.Forms) applications. Like those frameworks, Blazor acts much like a Single Page Application (SPA) with two-way data binding, and it can benefit from the MVVM pattern. So whether you have existing MVVM code in the form of a WPF or mobile application, or are starting fresh with new code, you can leverage MVVM to re-use your existing code in Blazor or share your code with other platforms.

You can find more information on MVVM on Wikipedia.

Example Presentation Layer

BindableBase 

At the heart of MVVM is the INotifyPropertyChanged interface, which notifies clients that a property has changed. It is this interface that turns a user interaction into a call into your code. Usually all ViewModels, and some Models, implement INotifyPropertyChanged; therefore, it is common to either use a library (Prism, MVVM Light, Caliburn) or create your own base class. What follows is a minimal implementation of INotifyPropertyChanged.

using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class BindableBase : INotifyPropertyChanged
{
    // Sets the backing field and raises PropertyChanged only when the value
    // actually changes. Returns true if the value changed, false otherwise.
    protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        OnPropertyChanged(propertyName);
        return true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Notifies subscribers (e.g., the binding engine) that a property changed.
    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

In the simplified model class below, which derives from BindableBase, we have a NewCustomerModel with a single property, FirstName. In this context, a customer is filling out a form on a website and must fill in their first name. The input is bound to an instance of NewCustomerModel on the ViewModel. Since we are in a two-way data binding scenario, each time the customer enters or removes a character in the form’s input box, SetField() is called and causes the PropertyChanged event to fire.

public class NewCustomerModel : BindableBase
{
    private string firstName;
    
    public string FirstName
    {
        get => firstName;
        set
        {
            SetField(ref firstName, value);
        }
    }
}

Learn More: If you need to know more about INotifyPropertyChanged the Microsoft Docs cover this topic very well.

Model

With INotifyPropertyChanged out of the way, here is the entire presentation model.

public class NewCustomerModel : BindableBase
{
    [Display(Name = "Customer Number")]
    public string CustomerNumber { get; set; }
 
    [Display(Name = "Full Name")]
    public string FullName => $"{FirstName} {LastName}";
 
    private string firstName;
    [Required]
    [Display(Name = "First Name")]
    public string FirstName
    {
        get => firstName;
        set
        {
            SetField(ref firstName, value);
            OnPropertyChanged(nameof(FullName));
        }
    }
 
    private string lastName;
    [Required]
    [Display(Name = "Last Name")]
    public string LastName
    {
        get => lastName;
        set
        {
            SetField(ref lastName, value);
            OnPropertyChanged(nameof(FullName));
        }
    }
 
    [Display(Name = "Address")]
    public string Address => $"{Street}, {City}, {State} {PostalCode}";
 
    private string street;
 
    [Required]
    [Display(Name = "Street Address")]
    public string Street
    {
        get => street;
        set
        {
            SetField(ref street, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string city;
 
    [Required]
    [Display(Name = "City")]
    public string City
    {
        get => city;
        set
        {
            SetField(ref city, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string state;
 
    [Required]
    [Display(Name = "State")]
    public string State
    {
        get => state;
        set
        {
            SetField(ref state, value);
            OnPropertyChanged(nameof(Address));
        }
    }
    private string postalCode;
 
    [Required]
    [Display(Name = "Zip Code")]
    public string PostalCode
    {
        get => postalCode;
        set
        {
            SetField(ref postalCode, value);
            OnPropertyChanged(nameof(Address));
        }
    }
}

There are a few things to point out in this presentation model. First, note the use of Data Annotation attributes such as [Required]. You can decorate your properties to provide rich form validation feedback to your users. When the customer misses a required field, the form will not pass model validation; this prevents the form from being submitted and provides an error message if one is configured. We will cover this more in the View section.

Next, I covered SetField() in the INotifyPropertyChanged section, but there is an additional bit of complexity.

[Display(Name = "Full Name")]
public string FullName => $"{FirstName} {LastName}";

Note that the FullName property is a { get; }-only concatenation of the customer’s first and last name. Since we ask the customer to fill out the first and last name in separate form fields, changing either one causes FullName to change, and we want the ViewModel to be informed of any changes to FullName.

private string firstName;
[Required]
[Display(Name = "First Name")]
public string FirstName
{
    get => firstName;
    set
    {
        SetField(ref firstName, value);
        OnPropertyChanged(nameof(FullName));
    }
}

After the SetField() is invoked in the base class, there is an additional call to OnPropertyChanged(), which lets the ViewModel know that in addition to FirstName changing, FullName has also changed.

Example ViewModel Interface

The example ViewModel below will expand on the model above. We’ll be using a simplified user story of “Creating a New Customer.”

Blazor supports .NET Core’s dependency injection out of the box, which makes injecting a ViewModel very simple. In the following ViewModel interface, we’ll need our concrete class to have an instance of NewCustomer as well as a method which knows how to create a new customer.

public interface ICustomerCreateViewModel
{
    NewCustomerModel NewCustomer { get; set; }
    void Create();
}

And the concrete implementation of ICustomerCreateViewModel:

public class CustomerCreateViewModel : ICustomerCreateViewModel
{
    private readonly ICustomerService _customerService;

    public CustomerCreateViewModel(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    public NewCustomerModel NewCustomer { get; set; } = new NewCustomerModel();

    public void Create()
    {
        // Map the presentation model to the data layer entity. FullName and
        // Address are the computed properties on the presentation model.
        var customer = new NewCustomer()
        {
            CustomerNumber = Guid.NewGuid().ToString().Split('-')[0],
            FullName = NewCustomer.FullName,
            Address = NewCustomer.Address
        };

        // Persist the new customer through the data service.
        _customerService.AddNewCustomer(customer);
    }
}
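
For completeness, here is a sketch of how these types might be registered with the built-in DI container in Startup.ConfigureServices. CustomerService is an assumed concrete implementation of ICustomerService:

public void ConfigureServices(IServiceCollection services)
{
    // Scoped lifetime: in server-side Blazor, one instance per user connection.
    services.AddScoped<ICustomerService, CustomerService>();
    services.AddScoped<ICustomerCreateViewModel, CustomerCreateViewModel>();
}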

ViewModel Deep-Dive

In the constructor, we’re getting an instance of our ICustomerService which knows how to create new customers when provided the data layer entity called NewCustomer.

I need to point out that NewCustomer and NewCustomerModel serve two different purposes. NewCustomer, a simple class object, is the data entity used to persist the item. NewCustomerModel is the presentation model. In this example, we save the customer’s full name as a single column in a database (and is a single property in NewCustomer), but on the form backed by the NewCustomerModel presentation model, we want the customer to fill out multiple properties, ‘First Name’ and ‘Last Name’.

In the ViewModel, the Create() method shows how a NewCustomerModel is mapped to a NewCustomer. There are some tools that are very good at doing this type of mapping (like AutoMapper), but for this example the amount of code to map between the types is trivial. For reference, what follows is the data entity.

public class NewCustomer
{
    public string CustomerNumber { get; set; }
    public string FullName { get; set; }
    public string Address { get; set; }
}

Opinionated Note: Presentation models and data entities should be separated into their respective layers. It is possible to create a single CustomerModel and use it for both presentation and data layers to reduce code duplication, but I highly discourage this practice.

View

The final piece of the MVVM pattern is the View. The View in the context of Blazor is either a Page or a Component: a .razor file or a .cshtml file containing Razor code, which is a mix of C# and HTML markup. In the context of this article, our view is a customer form that can be filled out, with a button that calls the ViewModel’s Create() method when the form has been filled out properly according to the validation rules.

@page "/customer/create"
@using HappyStorage.Common.Ui.Customers
@using HappyStorage.BlazorWeb.Components
@inject Microsoft.AspNetCore.Components.IUriHelper UriHelper
@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel
 
<h1>Create Customer</h1>
 
<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />
    <div class="form-group">
        <h3>Name</h3>
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
 
        <LabelComponent labelFor="(() => viewModel.NewCustomer.LastName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />
    </div>
    <div class="form-group">
        <h3>Address</h3>
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.Street)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.Street" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.City)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.City" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.State)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.State" />
 
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.PostalCode)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.PostalCode" />
    </div>
    <br />
    <button class="btn btn-primary" type="submit">Submit</button>
    <button class="btn" type="button" onclick="@ReturnToList">Cancel</button>
</EditForm>

The first thing to note is at the top of the code. This is how we use dependency injection to get an instance of our ViewModel.

@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel

Easy! Next, we need to create the form. The EditForm component needs an instance of a model to bind to (the NewCustomer instance on our ViewModel) and a method to call when the user submits a valid form.

<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">
...
</EditForm>

Next, we bind each property to its respective input field. Blazor has built-in input components (such as InputText) which help you accomplish the binding. They are still under development, and you may find some features lacking at the time of writing. Please refer to the docs in the note below for more up-to-date info.

Note: The LabelComponent is something I’ve created as a replacement for the asp-for tag helper; it retrieves the DisplayAttribute from the presentation model classes. That code is available in the GitHub repository listed at the top.

<LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
 
<LabelComponent labelFor="(() => viewModel.NewCustomer.LastName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />

The magic here is bind-Value, which binds our InputText text box to the value of the ViewModel’s instance of the NewCustomerModel presentation model.

Note: You can view full documentation on Blazor Forms and Validation here.

Last but not least, we’ll need some code to call our ViewModel’s Create() method when the form is submitted and valid, plus the ReturnToList() method I’ve wired to the Cancel button’s onclick.

@functions {
    private void HandleValidSubmit()
    {
        viewModel.Create();
        ReturnToList();
    }
 
    private void ReturnToList()
    {
        UriHelper.NavigateTo("/customers");
    }
}

Conclusion

That’s it! In summary, I’ve covered what MVVM is, how Blazor can benefit from it, as well as an in-depth look at a simple example of how we can create a form with validation and rich feedback to the user. It is also important to reiterate that this example works not only in Blazor but can also be used in Windows Presentation Foundation (WPF) desktop applications as well as on other platforms. Please check out the GitHub repository as I continue to develop and expand on this concept.

Developer Gotchas

by Morgan Baker 

Working with a new framework like Blazor always has its learning experiences. The goal of this section is to help alleviate headaches by providing common problems and solutions we encountered with Blazor.

  • My localization isn’t working!
    For this problem, check your route parameters. Depending on the type of the parameter, the route uses the invariant culture by default, allowing no localization in URLs. This can be solved by accepting the parameter as a string and then validating the type in C# code before using it (see the sketch after this list).
  • I can’t debug my C# code!
    Server-side debugging for Blazor doesn’t exist yet, but you’ll still be able to debug the whole application (assuming your server-side is using ASP.NET Core).
  • I can’t see my C# in the browser!
    C# code in Blazor is compiled through WebAssembly before being delivered to the browser. When this happens, the C# can’t be displayed in the browser. However, you can still see the code in Chrome through remote debugging. Follow these steps.
  • Why isn’t my new route working?
    Most of the time you’ll need to rebuild the application to get new routes on development applications. Other causes might be naming problems or a problem with the route parameter types.
  • Everything seems to be loading slowly
    This can stem from multiple issues, some of which are not Blazor-specific. For the Blazor-specific ones, it varies between server and client: any page using server-side Blazor must make a round-trip to the server, which costs performance, while any site using client-side Blazor will have a long initial load time and then perform better afterward.
  • I’m seeing a blank page and I set everything up correctly!
    This is a specific one that I ran into when first using the templates in Visual Studio 2019. The solution was making sure I had the right .NET Core SDK installed. You can have the wrong version and still create a Blazor website with no errors, at least until the app starts running. You can install the latest version of the .NET Core SDK here.
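
Here is a minimal sketch of the route-parameter workaround from the first gotcha above (the route and names are hypothetical): accept the segment as a string, then parse and validate it in C#.

@page "/orders/{OrderId}"

@if (parsedId.HasValue)
{
    <p>Showing order @parsedId</p>
}
else
{
    <p>Sorry, "@OrderId" is not a valid order number.</p>
}

@functions {
    // Accepting the parameter as a string avoids the route's invariant-culture
    // type constraint; we validate the value ourselves.
    [Parameter] string OrderId { get; set; }

    int? parsedId;

    protected override void OnParametersSet()
    {
        parsedId = int.TryParse(OrderId, out var id) ? id : (int?)null;
    }
}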

Online Resources

by JP Roberts III 

As of the writing of this blog post, Blazor is still a new framework, and as such, is still changing rapidly. Pluralsight doesn’t have any courses covering Blazor, Udemy only has a couple of short videos, and Microsoft’s Learning site has no specific courses dedicated to Blazor.

However, there are several websites with a good deal of information and samples for developers.

YouTube also has several informative videos on Blazor, such as a two-part series on the Microsoft Visual Studio channel: Blazor – Part 1 and Blazor – Part 2.

In this episode of the Azure Government video series, Steve Michelotti sits down with AIS’ very own Vishwas Lele to discuss migrating and modernizing with Kubernetes on Azure Government. You’ll learn about the traditional approaches for migrating workloads to the cloud, including:

1. Rehost
2. Refactor
3. Reimagine

You will also learn how Kubernetes provides an opportunity to fundamentally rethink these traditional approaches to cloud migration, leveraging Kubernetes to get the “best of all worlds” in the migration journey. If you’re looking to migrate your existing legacy workloads to the cloud while minimizing code changes and taking advantage of innovative cloud-native technologies, this is the video you should watch!

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

Meet some of the AIS Recruiting Team – they’re going to talk you through some of their top recommended job interview tips.

(Transcript)

My name is Francesca Hawk. My name is Rana Shahbazi. My name is Kathleen McGurk. My name is Jenny Wan. My name is Denise Kim.

Tip #1: Be Open, Transparent & Direct

I think it’s important for candidates to be authentic and transparent throughout the entire interview process.

Keeping the line of communication open through the interview process is really important for both sides. If you have other opportunities on the table, say that. The recruiters are your advocates and, in essence, kind of your best friend. Be direct and give us enough feedback – if you are not interested, if the commute is an issue, if you want more money, or if your clearance is an issue – just let us know.

Tip #2: Know What You Want

So before even searching for opportunities, you have to figure out what you’re looking for in a company. And once you figure that out – whether it’s the culture of the company or the location of the company – definitely ask questions of the recruiter prior to the interview, so that you have a little bit of that information while you’re at the interview.

Tip #3: Be On Time & Be Prepared

You always want to make sure you’re on time. Generally, you want to arrive about 15 minutes before your interview. Know where you’re going to park and make sure you look up directions ahead of time. And just be prepared in general.

Preparation is extremely underrated in the interview process, so really do your research and get familiar with the company and the culture there. Go online. Check out the general website, check out the job description. Make sure you’re aware of the skills and qualifications and what they’re really looking for. Glassdoor always provides really good reviews from current employees. The company website and certainly LinkedIn are a huge aspect – social media in general.

Tip #4: Ask Questions

Ask questions, or have questions ready to ask us. Ask about the process, the expectations, who you’ll potentially be meeting with, and what the duration could be. The company can’t provide information unless you ask for it.

You also have to make sure that you are interviewing the company just as much as they’re interviewing you. Ask the interviewers about the culture, because you’re going to get a different response from everybody, but if they all seem to check out or say the same thing, then that means the culture is pretty good.

Just make sure that you feel comfortable with the environment that, you know, you’re going to be working in.

Tip #5: Make Sure You Understand the Role

Really use the opportunity to understand the position and then to sell your strengths and also kind of tie it back into your accomplishments.

Make sure that you talk about what you were individually able to accomplish in a project – what you personally were able to bring to the table – and not necessarily what the team accomplished as a whole.

Tip #6: Show Your Interest

I think your presentation and the way you present yourself to the interviewers, and anybody you interact with in the interview process, is extremely important.

So not just what you say, but how you say it. Eye contact and body language say a lot about your interest in the position and the company as a whole.

Showing your interest makes a recruiter feel that you’re confident, that you can certainly do the role, and that you are excited about this opportunity.

I think you should be excited about interviewing with a company that you’re interested in. That sounds silly, but going in excited matters, and that’s why body language and eye contact are all very important aspects.

Tip #7: Listen

People are so busy thinking about what they’re going to say next that they don’t actually pay attention to the questions being asked.

So making sure that you’re hearing what they’re saying and then taking the time to respond is really important.

Tip #8: Follow Up

Certainly, you know, asking about next steps is very helpful, and it’s another way of expressing your interest. Definitely be responsive – the general rule of thumb is a turnaround time within 12 hours. And if you’re not interested, or AIS isn’t your number one opportunity, that’s okay – we like to know that as well.

You definitely want to send a thank you note – it goes a long way and it shows you’re very interested in the company and it always leaves a great impression.

We’re Hiring!

AIS is always looking to connect with talented technologists who are passionate about learning and growing to staff exciting new projects for our commercial and federal clients. If you’re interested in working at AIS, check out our current career openings.

We’re proud to announce that AIS has successfully renewed all six of our Microsoft Gold Partner competencies for 2019. AIS has been consistently recognized as a Microsoft Gold Partner for many years now, and we’re currently distinguished at the Gold level for:

    • DevOps
    • Cloud Platform
    • Cloud Productivity
    • Application Development
    • Application Integration
    • Collaboration and Content


The Microsoft Partner Program: Defining the Levels of Excellence

Each of these achievements is an important benchmark in the competitive world of Microsoft technology partners. Every year, Microsoft evaluates our staff, our project history, and our customer references. A single Gold competency requires employees holding multiple Microsoft Certified Professional (MCP) certifications, five in-depth customer references, numerous developer exams passed, and other objectives met.

We’re proud that over 70% of our staff maintains relevant certifications, validating our knowledge and expertise and allowing us to reach the Gold level across so many areas of our business. Congrats to the entire AIS team for once again bringing home the Gold!

Interested in learning more about our involvement as a certified Microsoft Gold Partner? Click here to get in touch with a solutions executive or give us a call today at 703-860-7800.

Calling all developers, tech professionals, and IT and business leaders! February 4-5, 2019, Microsoft is hosting the Ignite the Tour DC event in Washington, D.C. at the Walter E. Washington Convention Center.

This event is government-focused, delivering 100+ deep-dive sessions and workshops from over 350 professionals to help you meet your mission. The event is FREE, but you will need a ticket. (Note, this is currently sold out, but you can join the waitlist here.)

About the Session: Migrate and Modernize with Kubernetes in Azure Government

AIS CTO Vishwas Lele will be joined by Microsoft’s Steve Michelotti to present on the topic “Migrate and Modernize with Kubernetes in Azure Government” on Tuesday, February 5, 2019, from 12:50 PM to 1:50 PM.

If you are overwhelmed by the daunting prospect of migrating your on-premises workloads to the cloud, confused about which approach to take, or torn between doing a lift-and-shift to the cloud versus modernizing your architectures – this session is for you!

During this session, we’ll show you how to utilize cloud-native technologies to migrate your workloads to Azure and realize significant cost savings with minimal code changes, moving your organization a step closer to modernization. The presentation will be demo-heavy, giving you an inside look at using Kubernetes to migrate your workloads to Azure Government.

Stop by Booth #58 to See Us

We hope to see you there! You can find us at Booth #58, the closest one to the “Fun Lounge.” Already attending? Let us know you’re coming — we can schedule some time to talk.

Last week we laid out some basics of what we call the “Full PaaS” approach to legacy app modernization. While it might not make sense in every situation, we recently completed a modernization effort using the Full PaaS approach. Here’s some background and the steps we took…

Stop Playing Legacy App “Whack-a-Mole”

Our enterprise customer developed and owned a budgeting application. The application was over five years old and built on tech that, while modern at the time, had become “stale” over the years. Usage patterns for the application included huge spikes in demand during specific times of the year, and the need to meet these demands prompted the team to “reactively” invest in servers with more memory, better networking equipment, and other fixes. Problems were only addressed as they cropped up, with no time for long-term planning.

Yet despite throwing money and equipment at those problems, the issues with the platform continued, while customers were demanding more functionality. Unfortunately, since most of the application team’s time was spent reacting to operating issues, that simply couldn’t happen. Additionally, the application team’s O&M budget shrank over time, leaving a smaller staff responsible for the application.

After analyzing the application, we determined that three tiers of the application (compute, cache, database) could be moved to the “as-a-service” model with a reasonable amount of refactoring, for two main reasons:

  • This model would address the seasonal demand challenges by leveraging “auto-scaling” capabilities built directly into the services the application would now be consuming.
  • As an added benefit, these services allowed scripted deployments, automated monitoring, and easier provisioning of “test” environments to get new features in the hands of users more quickly.

The 7 Steps to “Full-PaaS” Modernization

Once we chose the Full PaaS approach, we completed the modernization effort by following these seven (high level) steps:

  1. Analyze application dependencies: This includes compute and data tiers, software architecture, reliance on existing resources.
  2. Identify services to replace legacy components: Not everything will port directly, so map out your replacements ahead of time.
  3. Establish candidate PaaS architecture: Choose your cloud platform, specific services to be used, and architect the flow of communication between the services.
  4. Validate with internal stakeholders: We talk to everyone from operations staff to security to business users of the application.
  5. Refactor your application code: Target the PaaS services included in the candidate architecture.
  6. Automate your application delivery and integrate CI/CD: This is one of the biggest benefits to this modernization approach, so take full advantage of it!
  7. Establish a living roadmap for ongoing improvement: We’re looking for both ongoing improvements to delivery automation and for any additional applications that can also adopt the model.

DISCOVER THE RIGHT APPROACH FOR MODERNIZING YOUR APPLICATIONS
Download our free whitepaper to explore the various approaches to app modernization: where to start, how it's done, pros and cons, challenges, and key takeaways.