As we all adjust to remote work, I have found some extra time to record a Power Platform Administration Foundation Pluralsight course to complement the “dev side” course released in 2018, Low-Code Development with Power Apps.

This course is designed for developers, both citizen and professional, interested in a low-code approach to building mobile applications. In this course, I use aspects of Azure PowerShell and the Azure CLI to demonstrate Power Platform management, as I see a commonality between PowerApps and Azure administration.

You can find the Pluralsight course here.

My daughter drew the following sketch, which captures the principal motivation for creating this course: helping Power Platform admins attain equilibrium. Today, the balance seems to be tilting towards citizen developers, making IT departments uncomfortable about moving forward with Microsoft Power Platform adoption.

The Power Platform Adoption Framework created by Andrew Welch and the team speaks to the “adoption at scale” theme and an approach to enterprise management and governance that resonates with customers and puts IT and security departments at ease.

This course is about teaching admins the basic skills needed to effectively administer the Power Platform. My hope is that by making the Power Platform admin tools somewhat match those of Azure, acceptance of the platform may come a bit easier.

The current approach of mass quarantining is intended to “flatten the curve.” However, our understanding of how the virus has spread, and how it could return as our economy is eventually restored, is still blurry. Recent work by Abbott Labs (among others) shows that shortening testing times and mass-producing testing kits at affordable prices looks promising.

However, despite the advancements by Abbott Labs, it is not feasible to test everyone in America. As of today, April 5th, we have tested, on average, one in every two hundred Americans, a test rate comparable to South Korea’s. Even this ramp-up in testing has not moved us closer to reopening our economy.

Some have proposed the idea of checking for antibodies, which would suggest immunity to the virus because of a prior infection. The logic is that people who have these antibodies can safely return to work and take on the critical tasks needed to restart the economy. Nonetheless, recent news from across Asia warns us that patients previously diagnosed with COVID-19 are being readmitted to hospitals after testing positive for the virus again.

So as it stands, our current approach of mass quarantining, which media outlets have predicted could last up to twelve months, is not only slow but is also pushing us down a path of economic damage that may be difficult to recover from. Scaling up testing and developing new methods that check for antibodies, while advantageous, will not by itself be enough to reopen our economy.

An Aggressive Data-Driven Approach is Needed

I am suggesting an aggressive data-driven approach to understanding how COVID-19 is spreading. Based on these insights, we can demarcate safe zones where essential economic activity can be reinstituted with minimal risk. There are two aspects to this approach:

  1. We can develop more in-depth insights into how the virus is spreading. We must acknowledge that mass quarantining alone is not the best approach.
  2. Based on the insights we develop, we may be able to open parts of our country once again, with a measure of confidence based on the data.

Ultimately, the solution boils down to data and computation problems. Imagine if we took the phone numbers of everyone infected with COVID-19 (using an anonymized identifier rather than the actual phone numbers to protect the privacy of those involved). Then, using cell tower data to track the movement of those individuals based on their smartphones, we would perform location- and time-based searches to determine who might have come in contact with infected persons in the last forty-five days. We would then algorithmically place the search results into bands based on the degree of overlap in time and location. The algorithm would eliminate instances of location proximity where there is minimal chance of spread (for example, at a traffic intersection) and accord a higher weight to location proximity where there is a greater chance of the virus spreading (for example, at a coffee shop or workplace). All these factors would feed the calculation of a risk factor. Individuals who meet the high-risk criteria would be notified and instructed to self-quarantine immediately. We could go further and use the cell phone data to penalize them if they don’t follow the suggestion. These individuals should be sent a self-test kit on a priority basis.

If these individuals test positive, their immediate family would receive instant notification to also self-quarantine. The location where the individual came into contact with the COVID-19 infected patient that initiated this search would be notified as well. If they test negative, we still learn a vital data point about how the virus is spreading. These learnings, including the test results, would be fed into a continuously retraining machine learning algorithm. This algorithm would keep track of the trajectory of an infected person and common intersection locations. Additionally, it would account for an infected person being quarantined, thus removing a virus carrier from the mix. In summary, this algorithm is akin to performing deep, automated contact tracing at a level that cannot be matched by armies of volunteers.

Another important byproduct of the trained algorithm is the automatic extraction of “features.” In machine learning, a feature is an individual measurable property or characteristic of a phenomenon being observed [1]. For example, the algorithm may observe that many people are becoming infected without coming in direct contact with an already infected person. Based on observing millions of such data points, it can, on its own, identify discriminating features such as an infected mail carrier’s route or common meeting areas with surfaces like metals where coronavirus can remain active for days.

Using a continuously retraining algorithm, we can start to open parts of the country where the threat of spread is low. Any discovery of a COVID-19 case in a low-risk area will trigger the actions mentioned above and flow back as input to training. It should be evident that the dataset and algorithm described above are computationally challenging. We are talking about recursive data searches through a dataset comprising millions of citizens and a continuously learning algorithm with potentially billions of parameters.

But Hasn’t This Approach Already Been Used in Other Countries like Taiwan and Singapore?

There is no question that location tracking capabilities have been highly effective in controlling the spread of coronavirus. In Taiwan and Singapore, location tracking technologies were used very early in the outbreak, mainly for surveillance. In Korea, officials routinely send text messages to people’s phones alerting them to newly confirmed infections in their neighborhood, in some cases alongside details of where the unnamed person had traveled before entering quarantine. Based on my research, these countries did not rely on big data and deep learning techniques to derive insights from the data. In the case of Taiwan and Singapore, the dataset of infected persons is not large enough for such an analysis.

Summary

The U.S. Government has broad authority to request personal data in the case of a national emergency like the coronavirus. In the United States, phone companies such as AT&T and Verizon have extensive records on their customers’ movements. However, it does not appear that we are leveraging this large body of movement data to combat coronavirus. According to a recent Washington Post story, “AT&T said it has not had talks with any government agencies about sharing this data for purposes of combating coronavirus. Verizon did not respond to requests for comment.”

The goal of this post is to engender a collaborative discussion with experts in big data, ML, and medicine. Hopefully, there are efforts already underway based on a similar or better idea. Please send your comments via Twitter @vlele.

Note: This blog post is *not* about Kubernetes infrastructure API (an API to provision a Kubernetes cluster). Instead, this post focuses on the idea of Kubernetes as a common infrastructure layer across private and public clouds.

Kubernetes is, of course, well known as the leading open-source system for automating the deployment and management of containerized applications. However, its uniform availability is, for the first time, giving customers a “common” infrastructure API across public and private cloud providers. Customers can take their containerized applications and Kubernetes configuration files and, for the most part, move them to another cloud platform. All of this without sacrificing the use of cloud provider-specific capabilities, such as storage and networking, that differ across cloud platforms.

At this point, you are probably thinking about tools like Terraform and Pulumi that have focused on abstracting the underlying cloud APIs. These tools have indeed enabled a provisioning language that spans cloud providers. However, as we will see below, the Kubernetes “common” construct goes a step further: rather than being limited to a statically defined set of APIs, Kubernetes can extend its API dynamically through the use of plugins, described below.

Kubernetes Extensibility via Plugins

Kubernetes plugins are software components that extend and deeply integrate Kubernetes with new kinds of infrastructure resources. Plugins realize interfaces such as the Container Storage Interface (CSI). CSI defines an interface, along with minimum operational and packaging recommendations, for a storage provider (SP) to implement a compatible plugin.

Other examples of interfaces include:

  • Container Network Interface (CNI) – Specifications and libraries for writing plug-ins to configure network connectivity for containers.
  • Container Runtime Interface (CRI) – Specifications and libraries for container runtimes to integrate with kubelet, an agent that runs on each Kubernetes node and is responsible for spawning the containers and maintaining their health.

Interfaces and compliant plug-ins have opened the floodgates to third-party plugins for Kubernetes, giving customers a whole range of options. Let us review a few examples of “common” infrastructure constructs.

Here is a high-level view of how a plugin works in the context of Kubernetes. Instead of modifying the Kubernetes code for each type of hardware or cloud provider-offered service, it is left to the plugins to encapsulate the know-how to interact with the underlying hardware resources. A plugin can be deployed to a Kubernetes node as shown in the diagram below. It is the kubelet’s responsibility to advertise the capabilities offered by the plugin(s) to the Kubernetes API server.

Kubernetes Control Plane
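To make this concrete, node-level plugins (device plugins, CSI node drivers, CNI agents) are commonly packaged as containers and rolled out to every node with a DaemonSet, which the kubelet then discovers. The manifest below is a minimal sketch of that pattern, not taken from the post; the plugin name and image are hypothetical.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-device-plugin        # hypothetical plugin name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: example-device-plugin
  template:
    metadata:
      labels:
        name: example-device-plugin
    spec:
      containers:
      - name: example-device-plugin
        image: example.com/device-plugin:1.0   # illustrative image reference
        volumeMounts:
        - name: device-plugin
          # directory where the kubelet expects device plugins to register their sockets
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins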

“Common” Networking Construct

Consider a networking resource of type load balancer. As you would expect, provisioning a load balancer in Azure versus AWS is different.

Here is a CLI for provisioning ILB in Azure:

CLI for provisioning ILB in Azure

Likewise, here is a CLI for provisioning ILB in AWS:

CLI for provisioning ILB in AWS

Kubernetes, based on the network plugin model, gives us a “common” construct for provisioning the ILB that is independent of the cloud provider syntax.

apiVersion: v1
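The fragment above presumably begins a Service manifest. A minimal sketch of such a manifest, assuming a Service of type LoadBalancer carrying the Azure-internal annotation (the service name, port, and selector are illustrative), might look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service          # illustrative name
  annotations:
    # on AKS this annotation requests an internal load balancer; AWS and GCP use their own annotations
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer                 # the cloud provider plugin provisions the actual load balancer
  selector:
    app: my-app                      # illustrative selector
  ports:
  - port: 80
    targetPort: 8080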

“Common” Storage Construct

Now let us consider a storage resource type. As you would expect, provisioning a storage volume in Azure versus Google is different.

Here is a CLI for provisioning a disk in Azure:

CLI for provisioning a disk in Azure

Here is a CLI for provisioning a persistent disk in Google:

CLI for provisioning a persistent disk in Google

Once again, under the plugin (device) model, Kubernetes gives us a “common” construct for provisioning storage that is independent of the cloud provider syntax.

The example below shows a “common” storage construct across cloud providers. In this example, a claim for a persistent volume of size 1Gi with access mode “ReadWriteOnce” is being made. Additionally, the storage class “cloud-storage” is associated with the request. As we will see next, persistent volume claims decouple us from the underlying storage mechanism.

cloud-storage-claim
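In YAML form, that claim would look roughly like this (reconstructed from the description above; the claim name follows the figure caption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloud-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cloud-storage    # binds the claim to the "cloud-storage" StorageClass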

The StorageClass determines which storage plugin gets invoked to support the persistent volume claim. In the first example below, the StorageClass represents the Azure Disk plugin. In the second example, the StorageClass represents the Google Compute Engine (GCE) Persistent Disk plugin.

StorageClass

StorageClass Diagram 2
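As a sketch, the two StorageClass definitions could look like the following, using the in-tree Azure Disk and GCE Persistent Disk provisioners. The class name “cloud-storage” matches the claim above, the parameters are illustrative, and you would apply whichever variant matches the target cloud:

# Azure Disk variant
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/azure-disk     # Azure Disk plugin
parameters:
  storageaccounttype: Standard_LRS        # illustrative disk SKU
  kind: Managed
---
# GCE Persistent Disk variant
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-storage
provisioner: kubernetes.io/gce-pd         # GCE Persistent Disk plugin
parameters:
  type: pd-standard                       # illustrative disk type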

“Common” Compute Construct

Finally, let us consider a compute resource type. As you would expect, provisioning a compute resource in Azure versus GCE is different.

Here is a CLI for provisioning a GPU VM in Azure:

CLI for provisioning a GPU VM in Azure

Here is a CLI for provisioning a GPU in Google Cloud:

CLI for provisioning a GPU in Google Cloud

Once again, under the plugin (device) model, Kubernetes gives us a “common” compute construct across cloud providers. In the example below, we are requesting a compute resource of type GPU. An underlying plugin (NVIDIA) installed on the Kubernetes node is responsible for provisioning the requisite compute resource.

requesting a compute resource of type GPU

Source: https://docs.microsoft.com/en-us/azure/aks/gpu-cluster
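A sketch of such a request, assuming the NVIDIA device plugin is installed on the node, might look like the following; the pod name and image are illustrative, and a similar example appears in the AKS documentation linked above:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                 # illustrative name
spec:
  containers:
  - name: gpu-container
    image: nvidia/cuda:10.0-base     # illustrative CUDA base image
    resources:
      limits:
        nvidia.com/gpu: 1            # request one GPU; satisfied by the NVIDIA device plugin on the node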

Summary

As you can see from the examples discussed in this post, Kubernetes is becoming a “common” infrastructure API across private, public, hybrid, and multi-cloud setups. Even traditional “infrastructure as code” tools like Terraform are building on top of Kubernetes.

Azure Arc is one of the significant announcements coming out of #msignite this week. As depicted in the picture below, Azure Arc is a single control plane across multiple clouds, on-premises environments, and the edge.

Azure Arc

Source: https://azure.microsoft.com/en-us/services/azure-arc/

But we’ve seen single control planes before, no?

That is correct. The following snapshot (from 2013) shows App Controller securely connected to both on-premises and Microsoft Azure resources.

Azure App Controller in 2013

Source: https://blogs.technet.microsoft.com/yungchou/2013/02/18/system-center-2012-r2-explained-app-controller-as-a-single-pane-of-glass-for-cloud-management-a-primer/

So, what is different with Azure Arc?

Azure Arc is not just a “single pane” of control for cloud and on-premises. Azure Arc takes Azure’s all-important control plane, namely the Azure Resource Manager (ARM), and extends it *outside* of Azure. To understand the implication of that statement, it helps to go over a few ARM terms.

Let us start with the diagram below. ARM (shown in green) is the service used to provision resources in Azure (via the portal, Azure CLI, Terraform, etc.). A resource can be anything you provision inside an Azure subscription. For example, SQL Database, Web App, Storage Account, Redis Cache, and Virtual Machine. Resources always belong to a Resource Group. Each type of resource (VM, Web App) is provisioned and managed by a Resource Provider (RP). There are close to two hundred RPs within the Azure platform today (and growing with the release of each new service).

ARM

Source: http://rickrainey.com/2016/01/19/an-introduction-to-the-azure-resource-manager-arm/

Now that we understand the key terms associated with ARM, let us return to Azure Arc. Azure Arc takes the notion of the RP and extends it to resources *outside* of Azure. Azure Arc introduces a new RP called “Hybrid Compute”. See the details for the HybridCompute RP in the screenshot below. As you can imagine, the HybridCompute RP is responsible for managing resources *outside* of Azure. It manages these external resources by connecting to the Azure Arc agent deployed to the external VM. The current preview is limited to Windows and Linux VMs. In the future, the Azure Arc team plans to support containers as well.

RP Hybrid Compute Screenshot

Note: You will need to first register the provider using the command az provider register --namespace Microsoft.HybridCompute

Once we deploy the Azure Arc agent [1] to a VM running in Google Cloud, it shows up inside the Azure Portal within the resource group “az_arc_rg” (see screenshot below). The Azure Arc agent requires connectivity to the Azure Arc service endpoints for this setup to work. All connections are outbound from the agent to Azure and are secured with SSL. All traffic can be routed via an HTTPS proxy.

deploy the Azure Arc agent [1] to a VM running in Google cloud

Since the Google Cloud hosted VM (gcp-vm-001) is an ARM resource, it is an object inside Azure AD. Furthermore, a managed identity can be associated with the Google Cloud VM.

Benefits of Extending ARM to Resources Outside Azure:

  • Ability to manage external VMs as ARM resources using the Azure Portal and CLI, as well as the ability to add tags, as shown below.
    Ability to manage external VMs as ARM resources using Azure Portal
  • Ability to centrally manage access and security policies for external resources with Role-Based Access Control.
    Manage access and security policies for external resources with Role-Based Access Control
    Microsoft Hybrid Compute Permissions
  • Ability to enforce compliance and simplify audit reporting.
    Ability to enforce compliance and simplify audit reporting

[1] The Azure Arc agent is installed by running the following script on the remote VM. This script is generated from the Azure portal:

# Download the package:
Invoke-WebRequest -Uri https://aka.ms/AzureConnectedMachineAgent -OutFile AzureConnectedMachineAgent.msi

# Install the package:
msiexec /i AzureConnectedMachineAgent.msi /l*v installationlog.txt /qn | Out-String

# Run the connect command (the tenant and subscription IDs are populated by the portal-generated script):
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "az_arc_rg" --tenant-id "" --location "westus2" --subscription-id ""

Late last Friday, the news of the Joint Enterprise Defense Infrastructure (JEDI) contract award to Microsoft Azure sent seismic waves through the software industry, government, and commercial IT circles alike.

Even as the dust settles on this contract award, including the inevitable requests for reconsideration and protest, DoD’s objectives from the solicitation are apparent.

DoD’s JEDI Objectives

Public Cloud is the Future DoD IT Backbone

A quick look at the JEDI statement of objectives illustrates the government’s comprehensive enterprise expectations with this procurement:

  • Fix fragmented, largely on-premises computing and storage solutions – This fragmentation is making it impossible to make data-driven decisions at “mission-speed”, negatively impacting outcomes. Not to mention that the rise in the level of cyber-attacks requires a comprehensive, repeatable, verifiable, and measurable security posture.
  • Commercial parity with cloud offerings for all classification levels – A cordoned off dedicated government cloud that lags in features is no longer acceptable. Furthermore, it is acceptable for the unclassified data center locations to not be dedicated to a cloud exclusive to the government.
  • Globally accessible and highly available, resilient infrastructure – The need for infrastructure that is reliable, durable, and can continue to operate despite catastrophic failure of pieces of infrastructure is crucial. The infrastructure must be capable of supporting geographically dispersed users at all classification levels, including in closed-loop networks.
  • Centralized management and distributed control – Apply security policies; monitor security compliance and service usage across the network; and accredit standardized service configurations.
  • Fortified Security that enables enhanced cyber defenses from the root level – These cyber defenses are enabled through the application layer and down to the data layer with improved capabilities including continuous monitoring, auditing, and automated threat identification.
  • Edge computing and storage capabilities – These capabilities must be able to function totally disconnected, including provisioning IaaS and PaaS services and running containerized applications, data analytics, and processing data locally. These capabilities must also provide for automated bidirectional synchronization of data storage with the cloud environment when a connection is re-established.
  • Advanced data analytics – An environment that securely enables timely, data-driven decision making and supports advanced data analytics capabilities such as machine learning and artificial intelligence.

Key Considerations: Agility and Faster Time to Market

From its inception, with the September 2017 memo announcing the formation of the Cloud Executive Steering Group, culminating in the release of the RFP in July 2018, DoD has been clear: they wanted a single cloud contract. They deemed a multi-cloud approach to be too slow and costly. The Pentagon’s Chief Management Officer defended a single cloud approach by suggesting that a multi-cloud contract “could prevent DoD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable”, resulting in “additional costs and technical complexity on the Department in adopting enterprise-scale cloud technologies under a multiple-award contract. Requiring multiple vendors to provide cloud capabilities to the global tactical edge would require investment from each vendor to scale up their capabilities, adding expense without commensurate increase in capabilities.”

A Single, Unified Cloud Platform Was Required

The JEDI solicitation expected a unified cloud platform that supports a broad set of workloads, with detailed requirements for scale and long-term price projections.

  1. Unclassified webserver with a peak load of 400,000 requests per minute
  2. High volume ERP system – ~30,000 active users
  3. IoT + Tactical Edge – A set of sensors that captures 12 GB of High Definition Audio and Video data per hour
  4. Large data set analysis – 200 GB of storage per day, 4.5 TB of online result data, 4.5 TB of nearline result data, and 72 TB of offline result data
  5. Small form-factor data center – 100 PB of storage with 2,000 cores, deliverable within 30 days of request and able to fit inside a U.S. military cargo aircraft

Massive Validation for the Azure Platform

The fact that the Azure platform is the “last cloud standing” at the end of the long and arduous selection process is massive validation from our perspective.

As other bidders have discovered, much to their chagrin, the capabilities described above are not developed overnight. It’s a testament to Microsoft’s sustained commitment to meeting the wide-ranging requirements of the JEDI solicitation.

Lately, almost every major cloud provider has invested in bringing the latest innovations in compute (GPUs, FPGAs, ASICs), storage (very high IOPS, HPC), and network (VMs with 25 Gbps bandwidth) to their respective platforms. In the end, what I believe differentiates Azure is a long-standing focus on understanding and investing in enterprise IT needs. Here are a few examples:

  • Investments in Azure Stack started in 2010 with the announcement of the Azure Appliance. It took over seven years of learnings to finally run Azure completely in an isolated mode. Since then, the investments in Data Box Edge, Azure Sphere, and a commitment to hybrid solutions have been a key differentiator for Azure.
  • With 54 Azure regions worldwide (available in 140 countries), including dedicated Azure Government regions – US DoD Central, US DoD East, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia, US Sec East, US Sec West – the Azure team has accorded the highest priority to establishing a global footprint. Additionally, having a common team that builds, manages, and secures Azure’s cloud infrastructure has meant that even the public Azure services have DoD CC SRG IL 2 and FedRAMP Moderate and High designations.
  • Whether it is embracing Linux or Docker, providing the highest number of contributions to GitHub projects, or open-sourcing the majority of Azure SDKs and services, Microsoft has demonstrated a leading commitment to open source solutions.
  • Decades of investment in Microsoft Research, including the core Microsoft Research Labs and Microsoft Research AI, has meant that they have the most well-rounded story for advanced data analytics and AI.
  • Documentation and ease of use have been accorded the highest engineering priorities. Case in point: rebuilding Azure docs entirely on GitHub, which has allowed an open feedback mechanism powered by GitHub issues.

This case study was featured on Microsoft. Click here to view the case study.

When GEICO sought to migrate its sales mainframe application to the cloud, they had two choices. The first was to “rehost” their applications, which involved recompiling the code to run in a mainframe emulator hosted in a cloud instance. The second was to “rebuild” their infrastructure and replace the existing mainframe functionality with equivalent features built using cloud-native capabilities.

The rehost approach is arguably the easier of the two alternatives. Still, it comes with a downside – the inability to leverage native cloud capabilities and benefits like continuous integration and deployment that flow from it.

Even though a rebuild offers the benefits of native cloud capabilities, it is riskier as it adds cost and complexity. GEICO was clear that it not only wanted to move away from the mainframe codebase but also to significantly increase agility through frequent releases. At the same time, the risks of the “rebuild” approach, involving a million lines of COBOL code and 16 subsystems, were staring them in the face.

This was when GEICO turned to AIS. GEICO hired AIS for a fixed-price, fixed-time engagement to “rebuild” the existing mainframe application. AIS extracted the business logic from the current system and reimplemented it from the ground up using a modern cloud architecture, baking in the principles of DevOps and CI/CD from the inception.

Together, the GEICO and AIS teams achieved the best of both worlds: a risk-mitigated “rebuild” approach to mainframe modernization. Check out the full success story on Microsoft: GEICO finds that the cloud is the best policy after seamless modernization and migration.

DISCOVER THE RIGHT APPROACH FOR MODERNIZING YOUR APPLICATIONS
Download our free whitepaper to explore the various approaches to app modernization, where to start, how it's done, pros and cons, challenges, and key takeaways.

Last week, AIS was invited to speak at the DIR one-day conference for public sector IT leaders with the theme Journey to the Cloud: How to Get There.

AIS’ session was titled Improve Software Velocity and Portability with Cloud Native Development. As we know, with a significant increase in the adoption of containers, cloud-native development is emerging as an important trend among customers looking to build new applications or lift and reshape existing applications in the cloud.

In this session, we discussed the key attributes of the cloud-native approach and how it improves modularity, elasticity, robustness, and – at the same time – helps avoid cloud vendor lock-in.

Cloud-Native Attributes

  • Declarative formats to set up automation
  • Clean contract with the underlying operating system, offering portability
  • Minimize divergence between development and production, enabling continuous deployment
  • Scale-up without significant changes to tooling, architecture, or development practices
  • Containers, service meshes, and microservices
  • Public, private, and hybrid cloud

Multi-Cloud Deployment Demo

To demonstrate cross-cloud portability, we conducted a live demonstration of an application being deployed to the Google Cloud Platform and Microsoft Azure during the session.

Note that the exact same application (binaries and Kubernetes deployment manifests) was used across the two clouds. The only difference was the storage provisioner, as shown in the diagram below.

deployment on multi cloud

We thank Texas DIR forum organizers and attendees for a great conference!

Related Resources

For additional information on cloud-native development, refer to the links below:

Since Steve Michelotti and I recorded the Migrate & Modernize with Kubernetes on Azure Gov video on Microsoft’s Channel 9, production-level support for Windows nodes has been added. So we decided to “upgrade” the Music Store app to take advantage of this capability.

Watch this short six-minute video to understand the changes involved.

In the previous version of this video, we had converted this application to containers in Kubernetes. However, at that time, Windows nodes for Kubernetes were not available. Since then, Kubernetes 1.14 has come out with support for Windows nodes. This video walks through upgrading the application and moving it directly into a Windows node pool in AKS.

This update gives us greater flexibility in migrating applications to Kubernetes and hosting them there while keeping the functionality of the application the same.

I hope you find this useful. Thanks!

Foundational Knowledge of CDS

At the core of any low-code development platform, such as the Microsoft Power Platform, is the ability to easily and securely store and manage the data used by business applications. This is where the Common Data Service (CDS) comes in. In this course, Introduction to Common Data Service (CDS), you will learn the skills you need to work effectively with CDS.

  • First, you’ll learn about the critical use cases and the high-level architecture of CDS in the context of the overall Power Apps Platform.
  • Next, you’ll explore CDS’ capabilities like entities, relationships, auditing, annotations, REST API, business rules, and workflow.
  • Finally, you’ll learn about the security, governance, and integration aspects of CDS.

When you’re finished with this course, you’ll have a foundational knowledge of CDS to use in your low-code applications. This is a beginner course, with a total duration of under two hours.

LEARN MORE AND REGISTER FOR THIS COURSE TODAY!

Sound Familiar?

It’s not a sentiment you would expect from most IT decision-makers. However, it’s something we hear from an increasing number of organizations.

The benefits of a well-thought-out cloud transformation roadmap are not lost on them.

  • They know that, in an ideal world, they ought to start with an in-depth assessment of their application portfolio, in line with the best practice – “migrate your capabilities, not apps or VMs”.
  • They also realize the need to develop a robust cloud governance model upfront.
  • And ultimately, they understand the need to undertake an iterative migration process that takes into account “organizational change management” best practices.

At the same time, these decision-makers face real challenges with their existing IT infrastructure that simply cannot wait months or years for a successful cloud transformation to take shape. They can’t get out of their on-premises data centers soon enough. This notion isn’t limited to organizations with fast-approaching Data Center (DC) lease renewal deadlines or end-of-support products, either.

So, how do we balance the two competing objectives:

  • Immediate need to move out of the DC
  • Carefully crafted long-term cloud transformation

A Two-Step Approach to Your Cloud Transformation Journey

From our experience with a broad range of current situations, goals, and challenges, we recommend a two-step cloud transformation approach that addresses both your immediate challenges and the organization’s long-term vision for cloud transformation.

  1. Tactical “Lift-n-Shift” to the Cloud – As the name suggests, move the current DC footprint as is (VMs, databases, storage, network, etc.) to Azure
  2. Strategic Cloud Transformation – Once operational in the cloud, incrementally and opportunistically move parts of your application portfolio to higher-order Azure PaaS/cloud-native services

Tactical “Lift-n-Shift” to the Cloud

Lift n Shift Approach to Cloud Transformation

On the surface, step #1 above may appear wasteful. After all, we are duplicating your current footprint in Azure. But keep in mind that step #1 is designed for completion in days or weeks, not months or years. As a result, the duplication is minimized. At the same time, step #1 immediately puts you in a position to leverage Azure capabilities, giving you tangible benefits with minimal to no changes to your existing footprint.

Here are a few examples of benefits:

  • Improve the security posture – Once you are in Azure, you tap into security capabilities such as intrusion detection and denial-of-service attack protection simply by being in Azure. Notice that I deliberately did not cite Security Information and Event Management (SIEM) tools like Azure Sentinel, since technically you can take advantage of Azure Sentinel for on-premises workloads as well.
  • Replace aging hardware – Your hardware may be getting old but isn’t old enough for a capex-powered refresh. Moving your VMs to Azure decouples you from the underlying hardware. “But won’t that be expensive, since you are now paying by usage per minute?” you ask. Not necessarily, and certainly not in the long run. Consider options like Reserved Instance (RI) pricing, which can offer up to an 80% discount based on a one- or three-year commitment.

Furthermore, you can combine RI with the Azure Hybrid Benefit (AHUB), which provides discounts for licenses you already own. Finally, don’t forget to take into account the savings from decreased needs for power, networking, real estate, and the cost of resources to manage all the on-premises assets. Even if you can’t get out of the DC lease completely, you may be able to negotiate a modular reduction of your DC footprint. Please refer to Gartner research that suggests that, over time, the cloud can become cost-effective.

AMP Move out of Data Center

Source – https://blogs.gartner.com/marco-meinardi/2018/11/30/public-cloud-cheaper-than-running-your-data-center/

  • Disaster Recovery (DR) – Few organizations have a DR setup that is conducive to ongoing DR tests. Having an effective DR plan is one of the most critical responsibilities of IT. Once again, since geo-replication is innate to Azure, your disks are replicated to an Azure region that is at least 400 miles away by default. Given this, DR is almost out of the box.
  • Extended lease of life on out-of-support software – If you are running an Operating System (OS) such as Windows Server 2008 or SQL Server 2008, moving to Azure extends the security updates for up to three years from the “end of support” date.
  • Getting out of the business of “baby-sitting” database servers – Azure managed instances offer you the ability to take your existing on-premises SQL Server databases and move them to Azure with minimal downtime. Once your database is an Azure SQL Managed Instance, you don’t have to worry about patching and backup, thereby significantly reducing the cost of ownership.
  • Take baby steps towards automation and self-service – Self-service is one of the key focus areas for most IT organizations. Once again, since every aspect of Azure is API driven, organizations can take baby steps towards automated provisioning.
  • Get closer to a data lake – I am sure you have heard the quote “AI is the new electricity.” We also know that Artificial Intelligence (AI) needs lots and lots of data to train Machine Learning (ML) algorithms. By moving to Azure, it is that much easier to capture the “data exhaust” coming out of your applications in a service like Azure Data Lake. In turn, Azure Data Lake can help turn this data into intelligence.

Strategic Cloud Transformation

Strategic Cloud Transformation

Once you have completed step #1 by moving your on-premises assets to the cloud, you are now in a position to undertake continuous modernization efforts aligned to your business priorities.

Common approaches include:

  • Revise – Capture application and application tiers “as-is” in containers and run on a managed orchestrator like Azure Kubernetes Service. This approach requires minimal changes to the existing codebase. For more details of this approach, including a demo, read Migrate and Modernize with Kubernetes on Azure Government.
  • Refactor – Modernize by re-architecting to target Platform as a Service (PaaS) and “serverless” technologies. This approach requires more significant recoding to target PaaS services but allows you to take advantage of cloud provider managed services. For more information, check out our “Full PaaS” Approach to Modernizing Legacy Apps.
  • Rebuild – Complete rewrite of the applications using cloud-native technologies like Kubernetes, Envoy, and Istio. Read our blog, What Are Cloud-Native Technologies & How Are They Different from Traditional PaaS Offerings, for more information.
  • Replace – Substitute an existing application, in its entirety, with Software as a Service (SaaS) or an equivalent application developed using a no-code/low-code platform.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

The following table summarizes the various approaches for modernization in terms of factors such as code changes, operational costs, and DevOps maturity.

Compare App Modernization Approaches

Azure Migration Program (AMP)

Microsoft squarely aligns with this two-step approach. At the recent Microsoft partner conference #MSInspire, Julia White announced AMP (Azure Migration Program).

AMP brings together the following:

Wrapping Up

A two-step migration offers a programmatic approach to unlock the potential of the cloud quickly. You’ll experience immediate gains from a tactical move to the cloud and long-term benefits from the strategic cloud transformation that follows. Microsoft programs like AMP, combined with over 200 Azure services, make this approach viable. If you’re interested in learning more about how you can get started with AMP, and which migration approach makes the most sense for your business goals, reach out to AIS today.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO TRANSFORMATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.