Lift-n-Shift Approach to Cloud Transformation

What does it mean to Rehost an application?

Rehosting is an approach to migrating business applications hosted in on-premises data center environments to the cloud by moving the application “as-is,” with little to no changes to the business functions performed by the application. It’s a faster, less resource-intensive migration approach that gets your apps into the cloud without much code modification. It is often a good first step to cloud transformation.

Organizations with applications that were initially developed for an on-premises environment commonly look to rehosting to take advantage of cloud computing benefits. These benefits may include increased availability and networking speeds, reduced technical debt, and a pay-per-usage cost structure. When defining your cloud migration strategy, it’s essential to analyze all migration approaches, such as re-platforming, refactoring, replacing, and retiring.

Read More…

End of Support is Coming

End of support for Windows Server 2008 and 2008 R2 is rapidly approaching. On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end; support for SQL Server 2008 and 2008 R2 already ended on July 9, 2019.

Windows Server Risks

What does this mean for my organization?

End of support means the end of monthly security updates and support from Microsoft. Without Microsoft’s regular security updates and patches to protect your environment, you expose your applications and data running on the platform to several risks. These risks may include the potential for security breaches, attacks, and compliance failure for important regulations such as GDPR, HIPAA, PCI, Sarbanes-Oxley, FedRAMP, and others. Read this datasheet for more details.

The requirements for maintaining compliant IT workloads vary depending on the regulation, but almost all of them forbid the use of unsupported software. Even where unsupported software is not officially prohibited, most compliance initiatives require prompt security patching. With this in mind, it’s particularly difficult for an organization to justify using software for which patches are no longer being created. Perhaps the most critical reason for IT professionals to migrate away from Windows Server 2008 and SQL Server 2008 before their end-of-life dates is that doing so is a matter of self-preservation.

The risks of not upgrading

Neglecting an end-of-life scenario can save a bit of money upfront; however, the risks associated with ignoring the end of support are far costlier. These issues vary in severity, from a security breach to an unfamiliar error message to a compatibility problem. IT professionals don’t want to be in a situation where they need to explain to management that an issue has occurred, and can’t be addressed, because the impacted workload runs on unsupported software.

We understand that upgrading to a newer version of Windows Server and SQL Server can be challenging and requires validation work. However, if your organization isn’t already acting on a plan to migrate and modernize your infrastructure before the end of support, you’re already behind.

Time to modernize

End of support is an ideal time to transform your IT platform and move your infrastructure and applications to the cloud. Nevertheless, it can be difficult to upgrade everything before the end-of-support deadlines. You can’t spend months or years, or dedicate your entire IT organization, to upgrading your critical end-of-support IT infrastructure.

So how do you quickly avoid critical security and compliance interruptions? What are your choices from here?

Move your servers to Azure as-is

The good news is that Microsoft announced Extended Security Updates will be available, for FREE, in Azure for the 2008 and 2008 R2 versions of Windows Server and SQL Server. This support will be available for three years after the end-of-support deadline. Still, organizations running end-of-support technologies need a quick solution for migrating their IT infrastructure to Azure: they must remain secure and compliant without taking months or years to create a strategic cloud transformation plan.

We often see the struggle to balance these two competing needs in large enterprise organizations faced with a myriad of legacy technologies and the pressure to modernize. The options are plentiful, the current infrastructure is complex, and decisions aren’t easy. This common scenario made us rethink how to approach modernization both quickly and strategically: address the immediate need to move out of a data center or off end-of-support technology while working toward a well-thought-out cloud transformation roadmap. AIS CTO Vishwas Lele details this two-step approach to the cloud transformation journey using a tactical Lift-n-Shift approach to rehost infrastructure on Azure.

Step 1: Move your end of support infrastructure into Azure as-is

Migrate your Windows Server and SQL Server applications to Microsoft Azure and breathe new life into your server infrastructure. The first step of this two-step approach aligns perfectly with the needs of migrating end-of-support workloads to Azure with minimal to no changes to your existing footprint (and near-zero downtime).

This positions you to:

  • Immediately meet deadlines for the end of support
  • Remain secure and compliant with critical business & industry regulations
  • Quickly leverage Azure capabilities (giving you tangible benefits)
  • Generate lasting cost savings through Microsoft’s financially attractive option to port your existing licenses to Azure

Some organizations shy away from a Lift-n-Shift approach. On the surface, it may seem wasteful, as we are duplicating your current footprint in Azure. However, by completing this effort in weeks, not months or years, duplication is minimized. Pair this with AIS’s FinOps methodology for cloud financial management best practices, and significant savings can be achieved by moving your servers to an Azure-optimized infrastructure. By comparison, running Windows Server in AWS could be as much as five times more expensive.

Step 2: Application innovation and modernization

Once you’ve started moving your on-premises infrastructure to the cloud, the modernization efforts begin, and a whole new world of opportunities to transform opens up. Even the modernization of your legacy applications can be accelerated by embracing Azure services.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

AIS has you covered with the migration of your infrastructure to Azure

With just a few months left before the Windows Server end-of-support deadline (the SQL Server deadline has already passed), updating your IT infrastructure must be a priority to avoid business disruption. Even with standardized processes and years of experience, deploying new versions of Windows Server and SQL Server is no small task in the enterprise.

Our experts have Azure covered so you can focus on doing business. AIS can help you jumpstart this process with a comprehensive cloud migration assessment. Our program gives you flexibility in gauging readiness to leverage cloud technology for your servers. Using machine learning and data collection, we can provide a portfolio inventory, data-driven insights, and recommendations to aid in defining your migration strategy. You’ll also get a clear line of sight into the economics of running your servers in the cloud, along with a clear roadmap for migration.

With this assessment, we can quickly prepare your cloud infrastructure and begin migrating servers to an environment that’s scalable and secure. With our extensive experience and expertise, we can get you migrated quickly.

Start your Azure migration planning today!

The time to act is now! While most coverage surrounding the end of support appears to emphasize the negative aspects, organizations that approach the situation through the right lens stand to reap the benefits of modernization.

Part of this approach requires organizations to choose a trusted and capable partner with the experience and skillsets to ensure a successful migration. With the impending deadlines quickly approaching, it’s time to take action.

Let AIS accelerate your end-of-support migration to Azure, starting with a cloud migration assessment, followed by a roadmap and the execution of an expert migration strategy.

GET AN ASSESSMENT OF YOUR WINDOWS SERVER 2008 WORKLOADS

Some Updates for Global Azure Virtual Network (VNet) Peering in Azure

Last year, I wrote a blog post discussing Global VNet Peering in Azure to highlight the capabilities and limitations. The use of global peering at that time was significantly different in capability from local peering and required careful consideration before including in the design. Microsoft is continually adding and updating capabilities of the Azure platform, and the information from my original post requires updates to describe the current state of VNet peering.

The virtual networks can exist in any Azure public cloud region, but not in Azure national clouds.

Update – Global VNet peering is now available in all Azure regions, including Azure national clouds. You can create peering between VNets in any region in Azure Government, and peering can exist between US DoD and US Gov regions. The peering can span both regions and subscriptions.

Azure's Global Footprint
The above image shows Azure regions and the global footprint.

In Azure commercial, a peering can also be created between VNets in different Azure Active Directory tenants using PowerShell or the command-line interface (CLI). This requires an account with access in both tenants and at least the minimum required permissions on the VNets (the Network Contributor role). In Azure Government, this is not currently possible, and peered VNets must exist under the same Azure Active Directory tenant.
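
As a rough sketch (resource names, subscription IDs, and tenant IDs below are placeholders), creating one side of such a peering with the Azure CLI might look like the following; the same command is then run from the second tenant’s context with the names reversed:

```
# Sign in to the first tenant with an account that holds Network Contributor
# rights on both VNets (an assumed prerequisite)
az login --tenant <tenant-a-id>

# Create one half of the peering; the remote VNet is referenced by its full resource ID
az network vnet peering create \
  --name peer-vnetA-to-vnetB \
  --resource-group rg-network-a \
  --vnet-name vnet-a \
  --remote-vnet "/subscriptions/<sub-b-id>/resourceGroups/rg-network-b/providers/Microsoft.Network/virtualNetworks/vnet-b" \
  --allow-vnet-access

# Repeat from the second tenant (az login --tenant <tenant-b-id>) with the
# names reversed to complete the peering
```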

Resources in one virtual network cannot communicate with the IP address of an Azure internal load balancer in the peered virtual network.

Update – This limitation applied to the load balancer available at that time. Load balancers are now offered in “Basic” and “Standard” tiers. The Basic load balancer is not accessible from a globally peered VNet, while the Standard load balancer is accessible across global peering and includes additional features. A design can generally be adapted to replace Basic load balancers with Standard load balancers in high-availability deployments when implementing global peering. Basic load balancers are a free resource; Standard load balancers are charged based on the number of rules and the volume of data processed.
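
For illustration, a hedged Azure CLI sketch (names and subnet are placeholders) of creating a Standard-tier internal load balancer, the variant reachable across global peering:

```
# Create a Standard SKU internal load balancer; placing the frontend IP in a
# subnet (rather than binding a public IP) makes the load balancer internal
az network lb create \
  --name lb-internal-std \
  --resource-group rg-network-a \
  --sku Standard \
  --vnet-name vnet-a \
  --subnet snet-app \
  --frontend-ip-name fe-internal \
  --backend-pool-name be-app
```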

Several Azure services also utilize a Basic load balancer and are subject to the same constraints. Please verify that the resources you are using for your specific design are supported.

You cannot use remote gateways or allow gateway transit. To use remote gateways or allow gateway transit, both virtual networks in the peering must exist in the same region.

Update – This is no longer accurate. A globally peered VNet can now use a remote gateway.
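
A hedged sketch of enabling this with the Azure CLI (peering and resource names are placeholders; the property names follow the VNet peering resource model):

```
# On the hub side: allow peered networks to use this VNet's gateway
az network vnet peering update \
  --name peer-hub-to-spoke \
  --resource-group rg-hub \
  --vnet-name vnet-hub \
  --set allowGatewayTransit=true

# On the globally peered spoke side: route through the remote (hub) gateway
az network vnet peering update \
  --name peer-spoke-to-hub \
  --resource-group rg-spoke \
  --vnet-name vnet-spoke \
  --set useRemoteGateways=true
```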

Transferring data between peered VNets does incur some cost. The cost is nominal within the same region. Cost may become significant when moving between regions and through gateways.

In summary, there have been significant updates to Global VNet Peering since my original post. The current capability now more closely aligns with local peering. These changes simplify network connectivity between regions, and the inclusion of multi-region redundancy makes disaster recovery more feasible.

Improve Networking and Connectivity in Your Environment. Contact AIS Today to Discuss Your Options.

If you’ve worked in software development or IT for any amount of time, chances are you’ve at least heard about containers…and maybe even Kubernetes.

Maybe you’ve heard that Google manages to spin up two billion containers a week to support their various services, or that Netflix runs its streaming, recommendation, and content systems on a container orchestration platform called Titus.

This is all very exciting stuff, but I’m more excited to write and talk about these things now more than ever before, for one simple reason: We are finally at a point where these technologies can make our lives as developers and IT professionals easier!

And even better…you no longer have to be a Google (or one of the other giants) employee to have a practical opportunity to use them.

Containers

Before getting into orchestrators and what they actually offer, let’s briefly discuss the fundamental piece of technology that all of this depends on: the container itself.

A container is a digital package of sorts, and it includes everything needed to run a piece of software.  By “everything,” I mean the application code, any required configuration settings, and the system tools that are normally brought to the table by a computer’s operating system. With those three pieces, you have a digital package that can run a software application in isolation across different computing platforms because the dependencies are included in that package.

And there is one more feature that makes containers really useful – the ability to snapshot the state of a container at any point. This snapshot is called a container “image.” Think of it in the same way you would normally think of a virtual machine image, except that many of the complexities of capturing the current state of a full-blown machine image (state of the OS, consistency of attached disks at the time of the snapshot, etc.) are not present in this snapshot.  Only the components needed to run the software are present, so one or a million instances can be spun-up directly from that image, and they should not interfere with each other.  These “instances” are the actual running containers.

So why is that important? Well, we’ve just alluded to one reason: Containers can run software across different operating systems (various Linux distributions, Windows, Mac OS, etc.).  You can build a package once and run it in many different places. It should seem pretty obvious at this point, but in this way, containers are a great mechanism for application packaging and deployment.

To build on this point, containers are also a great way to distribute your packages as a developer.  I can build my application on my development machine, create a container image that includes the application and everything it needs to run, and push that image to a remote location (typically called a container registry) where it can be downloaded and turned into one or more running instances.
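
As a minimal sketch of that workflow (the image name, tag, and registry are hypothetical), the Docker CLI version looks roughly like this:

```
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag the image for a remote registry (an Azure Container Registry, for example)
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0

# Push the image so other machines (or a cluster) can pull it
docker push myregistry.azurecr.io/myapp:1.0

# Anywhere else: pull the image and spin up a running instance from it
docker run -d myregistry.azurecr.io/myapp:1.0
```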

I said that you can package everything your container needs to run successfully, but the last point to make is that the nature of the container package gives you a way to enforce a clear boundary for your application and a way to enforce runtime isolation. This feature is important when you’re running a mix of various applications and tools…and you want to make sure a rogue process built or run by someone else doesn’t interfere with the operation of your application.

Container Orchestrators

So containers came along and provided a bunch of great benefits for me as a developer.  However, what if I start building an application, and then I realize that I need a way to organize and run multiple instances of my container at runtime to address the expected demand?  Or better yet, what if I’m building a system composed of multiple microservices that all need their own container instances running?  Do I have to figure out a way to maintain the desired state of this system that’s really a dynamic collection of container instances?

This is where container orchestration comes in.  A container orchestrator is a tool that manages how your container instances are created, scaled, managed at runtime, placed on underlying infrastructure, and allowed to communicate with each other.  The “underlying infrastructure” is a fleet of one or more servers that the orchestrator manages: the cluster.  Ultimately, the orchestrator helps manage the complexity of taking your container-based, in-development applications to a more robust platform.

Typically, interaction with an orchestrator occurs through a well-defined API, and the orchestrator takes up the tasks of creating, deploying, and networking your container instances – exactly as you’ve specified in your API calls across any container host (servers included in the cluster).
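
To give a flavor of that API-driven interaction (the deployment name and image below are hypothetical), the Kubernetes CLI simply wraps those API calls:

```
# Ask the orchestrator to run a container image as a managed deployment
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:1.0

# Declare the desired number of instances; the orchestrator converges to it
kubectl scale deployment myapp --replicas=3

# Expose the instances behind a single, stable cluster-internal address
kubectl expose deployment myapp --port=80 --target-port=8080

# See which cluster hosts the orchestrator placed the instances on
kubectl get pods -o wide
```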

Using these fundamental components, orchestrators provide a unified compute layer on top of a fleet of machines that allows you to decouple your application from these machines. And the best orchestrators go one step further and allow you to specify how your application should be composed, thus taking the responsibility of running the application and maintaining the correct runtime configuration…even when unexpected events occur.

VIEW OUR AZURE CAPABILITIES
Since 2009, AIS has been working with Azure, honing our capabilities and offerings. View an overview of our Azure-specific services and offerings.

Kubernetes

Kubernetes is a container orchestrator that delivers the capabilities mentioned above. (The name “Kubernetes” comes from the Greek term for “pilot” or “helmsman of a ship.”) Currently, it is the most popular container orchestrator in the industry.

Kubernetes was originally developed by Google, based in part on the lessons learned from developing its internal cluster management and scheduling system, Borg.  Google open-sourced Kubernetes in 2014 and later donated it to the Cloud Native Computing Foundation (CNCF), which encourages community involvement in its development. The CNCF is a child entity of the Linux Foundation and operates as a “vendor-neutral” governance group. Kubernetes is now consistently among the top ten open source projects by total contributors.

Many in the industry say that Kubernetes has “won” the mindshare battle for container orchestrators, but what gives Kubernetes such a compelling value proposition?  Well, beyond meeting the capabilities mentioned above regarding what an orchestrator “should” do, the following points also illustrate what makes Kubernetes stand out:

  • The largest ecosystem of self-driven contributors and users of any orchestrator technology facilitated by CNCF, GitHub, etc.
  • Extensive client application platform support, including Go, Python, Java, .NET, Ruby, and many others.
  • The ability to deploy clusters on-premises or in the cloud, including native, managed offerings from the major public cloud providers (AWS, GCP, Azure). In fact, you can use the SAME API with any deployment of Kubernetes!
  • Diverse workload support with extensive community examples – stateless and stateful, batch, analytics, etc.
  • Resiliency – Kubernetes is a loosely-coupled collection of components centered around deploying, maintaining and scaling workloads.
  • Self-healing – Kubernetes works as an engine for resolving state by converging the actual and the desired state of the system.

Kubernetes Architecture

A Kubernetes cluster will always include a “master” and one or more “workers”.  The master is a collection of processes that manage the cluster, and these processes are deployed on a master node or multiple master nodes for High Availability (HA).  Included in these processes are:

  • The API server (kube-apiserver), the front end through which all cluster interaction flows
  • A distributed key-value store for persisting cluster management data (etcd)
  • The core control loops for monitoring existing state and managing desired state (kube-controller-manager)
  • The core control loops that allow specific cloud platform integration (cloud-controller-manager)
  • A scheduler component for placing Kubernetes container groups, known as pods (kube-scheduler)

Worker nodes are responsible for actually running the container instances within the cluster.  In comparison, worker nodes are simpler in that they receive instructions from the master and set out serving up containers.  On the worker node itself, there are three main components installed which make it a worker node in a Kubernetes cluster: an agent called kubelet that identifies the node and communicates with the master, a network proxy for interfacing with the cluster network stack (kube-proxy), and a plug-in interface that allows kubelet to use a variety of container runtimes, called the container runtime interface.

diagram of Kubernetes architecture

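On most clusters, you can see these pieces for yourself; a quick sketch with kubectl:

```
# Control-plane (master) processes typically run as pods in the kube-system
# namespace: kube-apiserver, etcd, kube-controller-manager, kube-scheduler, etc.
kubectl get pods --namespace kube-system

# List the nodes in the cluster along with their roles
kubectl get nodes -o wide
```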

Managed Kubernetes and Azure Kubernetes Service

“Managed Kubernetes” is a deceptively broad term that describes a scenario where a public cloud provider (Microsoft, Amazon, Google, etc.) goes a step beyond simply hosting your Kubernetes clusters in virtual machines to take responsibility for deploying and managing your cluster for you.  Or, more accurately, they will manage portions of your cluster for you.  I say “deceptively” broad because the portions that are managed vary by vendor.

The idea is that the cloud provider is:

  1. Experienced at managing infrastructure at scale, and can leverage tools and processes most individuals or companies can’t.
  2. Experienced at managing Kubernetes specifically, and can leverage dedicated engineering and support teams.
  3. Able to add additional value by providing supporting services on the cloud platform.

In this model, the provider does things like abstracting the need to operate the underlying virtual machines in a cluster, providing automation for actions like scaling a cluster, upgrading to new versions of Kubernetes, etc.

So the advantage for you, as a developer, is that you can focus more of your attention on building the software that will run on top of the cluster, instead of on managing your Kubernetes cluster, patching it, providing HA, etc. Additionally, the provider will often offer complementary services you can leverage like a private container registry service, tools for monitoring your containers in the cluster, etc.

Microsoft Azure offers the Azure Kubernetes Service (AKS), Azure’s managed Kubernetes offering. AKS allows full production-grade Kubernetes clusters to be provisioned through the Azure portal or automation scripts (ARM templates, PowerShell, the CLI, or a combination; see the sketch after the list below).  Key components of the cluster provisioned through the service include:

  • A fully managed, highly available master. There’s no need to run separate virtual machines for the master components; the service provides this for you.
  • Automated provisioning of worker nodes – deployed as Virtual Machines in a dedicated Azure resource group.
  • Automated cluster node upgrades (Kubernetes version).
  • Cluster scaling through auto-scale or automation scripts.
  • CNCF certification as a compliant managed Kubernetes service. This means it leverages the Cloud-controller-manager standard discussed above, and its implementation is endorsed by the CNCF.
  • Integration with supporting Azure services including Azure Virtual Networks, Azure Storage, Azure Role-Based Access Control (RBAC), and Azure Container Registry.
  • Integrated logging for apps, nodes, and controllers.
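
As a minimal sketch (resource group, cluster name, and region are placeholders; everything else falls back to service defaults), provisioning such a cluster with the Azure CLI might look like:

```
# Create a resource group to hold the cluster resource
az group create --name rg-aks-demo --location eastus

# Provision a managed cluster with three worker nodes and monitoring enabled
az aks create \
  --resource-group rg-aks-demo \
  --name aks-demo \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Merge credentials into your kubeconfig so kubectl can talk to the new cluster
az aks get-credentials --resource-group rg-aks-demo --name aks-demo
```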

Conclusion

The world of containers continues to evolve, and orchestration is an important consideration when deploying your container-based applications to environments beyond “development.”  While not simple, Kubernetes is a very popular choice for container orchestration and has extremely strong community support.  The evolution of managed Kubernetes makes using this platform more realistic than ever for developers (or businesses) interested in focusing on “shipping” software.

As organizations increase their footprint in the cloud, there’s increased scrutiny on mounting cloud consumption costs, reigniting a discussion about longer-term costs.

This is not an entirely unexpected development. Here’s why:

  1. Cost savings were not meant to be the primary motivation for moving to the cloud – at least not in the manner most organizations are moving, which is to move their existing applications to the cloud with little to no changes. For most organizations, the primary motivation is “speed to value,” i.e., the ability to deliver business value faster by becoming more efficient in provisioning, automation, monitoring, the resilience of IT assets, etc.
  2. Often the cost comparisons between cloud and on-premises are not a true apples-to-apples comparison – For example, were all on-premises support staff salaries, depreciation, data center cost per square foot, rack space, power and networking costs considered? What about troubleshooting and cost of securing these assets?
  3. As these organizations achieve higher cloud operations maturity, they can realize increased cloud cost efficiency – for instance, by implementing effective auto-scaling, optimizing execution contexts by moving to dynamic consumption plans like serverless, and taking advantage of discounts through longer-term contracts.

Claim Your Free Whitepaper

In this whitepaper, we talk about the aforementioned considerations, as well as cost optimization techniques (including resource-based, usage-based and pricing-based cost optimization).

FREE WHITEPAPER ON AZURE COST MANAGEMENT: BACKGROUND, TOOLS, AND APPROACHES

Azure Web Apps Background

I’ve been working with Azure Web Apps for a long time. Before the launch of Azure Web Apps for Containers (or even Azure Web App on Linux), these web apps ran on Windows virtual machines managed by Microsoft. This meant that any workload running behind IIS (e.g., ASP.NET) would run without hiccups — but that was not the case for workloads that preferred Linux over Windows (e.g., Drupal).

Furthermore, the Azure Web Apps that ran on Windows were not customizable. This meant that if your website required a custom tool to work properly, chances are it was not going to work on an Azure Web App, and you’d need to deploy a full-blown IaaS Virtual Machine. There was also a strict lockdown regarding tools and language runtime versions that you couldn’t change. So, if you wanted the latest bleeding-edge language runtime, you weren’t gonna get it.

Azure Web Apps for Containers: Drum Roll

Last year, Microsoft released the Azure Web Apps for Containers (or Linux App Service plan) offering to the public. This meant we could build a custom Docker image containing all the binaries and files, and then deploy it on the PaaS offering. After working with the product for some time, I was sold.

The product was excellent, and it was clear that it had potential. Some of the benefits:

  • Ability to use a custom Docker image to run the Web App
  • Zero headaches from managing Docker containers
  • The benefits of Azure Web App on Windows like Backups, Kudu, Deployment Slots, Autoscaling (Scale up & Scale out), etc.

Suddenly, running workloads that preferred Linux or required custom binaries became extremely easy.
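
To make that concrete, here’s a hedged Azure CLI sketch (names are placeholders, and the image is assumed to already exist in a registry) of standing up a Web App for Containers:

```
# Create a Linux App Service plan
az appservice plan create \
  --name plan-linux-demo \
  --resource-group rg-webapps \
  --is-linux \
  --sku B1

# Create the web app directly from a custom Docker image
az webapp create \
  --name webapp-containers-demo \
  --resource-group rg-webapps \
  --plan plan-linux-demo \
  --deployment-container-image-name myregistry.azurecr.io/myapp:1.0
```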

The Architecture

Compared to Azure Web App on Windows, the architecture implemented in Azure Web App for Containers is different.

Diagram: Azure Web App on Windows architecture

Each of the above web apps is strictly locked down, with minimal possibility of modification. Furthermore, the backend storage is based on network file shares, which means that even if you don’t need any storage (for example, when your app simply reads data from a database and displays it), the app would still perform slowly.

Diagram: Azure Web App for Containers architecture

The major difference is that the Kudu/SCM site runs in a separate container from the actual web app. Both containers are connected to each other with a private network. In this case, each App Service Plan is deployed on a separate Virtual Machine and all the plumbing is managed by Microsoft. The benefits of this approach are:

  • Better isolation. If your Kudu is experiencing issues, it reduces the chance of taking down your actual website.
  • Ability to customize the actual web app container running the website.
  • Better resource utilization

Stay tuned for the next part, in which I’ll discuss the various storage options available in Azure Web App for Containers and their trade-offs.

Happy holidays!

Microsoft US SI of the Year Award at Microsoft Inspire
AIS won the 2018 Microsoft US SI of the Year award for Azure Performance at Microsoft Inspire in Las Vegas. The award recognizes AIS’ work in Azure consumption values, as well as our success as the #1 United States Co-Sell Partner in the Microsoft Co-Sell Initiative. With over $26 million in Azure consumption and over $12 million in total contract value, AIS assisted Microsoft in retiring over $1 million in Azure goals.

Microsoft generated more than 11,000 co-sell wins with partners like AIS during the past 12 months, equating to roughly $5 billion in contract value through the channel. The figures are the result of Microsoft’s newly-formed One Commercial Partner (OCP) roll-out, designed to drive deeper collaboration between internal direct sellers and partners.

Microsoft described the OCP-driven co-sell program as the “largest sales transformation” in decades.

“In less than a year, AIS partnered with the OCP team to conceive and deliver our co-sell offerings with market-leading results,” said Larry Katzman, AIS Vice President of Marketing and Sales. “We leveraged our Cloud Adoption Framework, which is a collection of services we’ve delivered multiple times while helping our clients adopt Azure. We also included our Legacy Modernization offerings.”

AIS will be expanding our co-sell offerings to include our Office 365 and Dynamics 365 adoption programs in the coming year.

“Congratulations to the OCP Team and the AIS Marketing and Sales Teams for turning the OCP vision into a reality so quickly,” said Tom O’Connell, AIS Managing Partner. “This is only the beginning. We’ve built a solid pipeline and expect even better results in FY19.”

AIS team members accepting the 2018 Microsoft US SI of the Year award

We can do this for you too! Check out our Azure QuickStart offering here.

First Things First…What’s a Data Lake?

If you’re not already familiar with the term, a “data lake” is generally defined as an expansive collection of data that’s held in its original format until needed. Data lakes are repositories of raw data, collected over time, and intended to grow continually. Any data that’s potentially useful for analysis is collected from both inside and outside your organization, and is usually collected as soon as it’s generated. This helps ensure that the data is available and ready for transformation and analysis when needed. Data lakes are central repositories of data that can answer business questions…including questions you haven’t thought of yet.

Azure Data Lake

Azure Data Lake is actually a pair of services: The first is a repository that provides high-performance access to unlimited amounts of data with an optional hierarchical namespace, thus making that data available for analysis. The second is a service that enables batch analysis of that data. Azure Data Lake Storage provides the high performance and unlimited storage infrastructure to support data collection and analysis, while Azure Data Lake Analytics provides an easy-to-use option for an on-demand, job-based, consumption-priced data analysis engine.
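
As a small, hedged sketch (account and resource group names are placeholders), the storage half of the pair can be provisioned as a storage account with the hierarchical namespace enabled:

```
# Create a storage account with the hierarchical namespace turned on,
# which is what makes it a Data Lake Storage (Gen2) account
az storage account create \
  --name datalakedemo01 \
  --resource-group rg-datalake \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2 \
  --hns true
```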

We’ll now take a closer look at these two services and where they fit into your cloud ecosystem. Read More…

For the latest updates, check out my Global VNet Peering in Azure blog posted 8/9/19.

First announced as a public preview in September 2017, Global VNet Peering is now generally available in all Azure public regions.

Similar to virtual network peering within the same Azure region, Global VNet Peering now lets you seamlessly connect virtual networks in different Azure regions. The connectivity between the peered virtual networks is routed through the Microsoft backbone infrastructure via private IP addresses. VNet peering provides virtual network connectivity without gateways, additional hops, or transit over the public internet. Global VNet Peering can simplify network designs that have cross-regional scenarios for data replication, disaster recovery, and database failover.

While similar, peering within the same region and peering across regions have unique constraints.  These are clearly identified in the Microsoft documentation, so check that out before you get started. Read More…