Power Platform is Microsoft’s newest pillar in its cloud platform stack, steadily growing in popularity as more and more organizations realize the tool’s capabilities. Put into the hands of experienced developers, the Platform can expedite the development of highly complex organizational applications. Provided to citizen developers, the Platform connects them to organizational data and allows them to automate personal and team workloads. At an organizational level, IT has insight into who is doing what, where, and for whom – and the ability to secure, manage, and govern it. It is a first-class platform.

So, why has it taken so long to be recognized as such? Part of the problem is a misunderstanding around what “low code” means.

When you hear the phrase “low code,” you shouldn’t be thinking “low thought” or “low quality.” Rather, think of low code as short-hand coding. The team at Microsoft has done a large chunk of the pre-work by setting up your Azure database (that’s right – the Power Platform is built on Azure!) so that you can focus more on the parts of your solution that are unique to your organization and less on the base architecture that is consistent between solutions.

The Platform offers several drag-and-drop features, many templates, and hundreds of premade data connectors you can utilize to get apps into minimum viable product (MVP) shape quickly. It also lets you use traditional coding to beef up these apps, allowing highly customizable experiences designed to fit your precise business use case. It democratizes the ability to create apps by having both ends of the spectrum covered, allowing a community to exist globally (and organizationally) between citizen developers who just want to make their jobs a little easier and professional developers whose full-time job is building.

It is this unique feature – the diverse community the Platform is capable of supporting – that really changes the game. The Platform is allowing organizations to rapidly scale their modernization efforts by leveraging resources both in and beyond their IT shops.

The base of most Power Platform solutions is built within the Common Data Service (CDS) – a data model that shapes your model-driven apps, is fed by your canvas apps, and provides a wealth of information to reports published with Power BI. Organizations can deploy multiple solutions to an environment’s CDS, meaning that you can package up a “base solution” – a solution that sets up the common entities that make up the core of your enterprise – and deploy it to all of your Platform environments as a starting point for your developers. Each app can then be packaged up as its own solution and stacked on top of this base, modifying it as needed and further accelerating how quickly a new solution can be deployed.

The CDS not only contains the data model but also stores the data itself. That means that when you build multiple apps in one environment, you’re creating a single source of truth for all of the applications touching that CDS; you can pull out processes and data points specific to one business unit while still letting it share the data with other units in the background. Connect your CDS-built apps to other data sources (an Azure SQL database, Outlook, or even competing software like Salesforce) using the hundreds of existing data connectors, or create custom connectors, to further utilize existing data and reduce duplicative sources (and the source control issues that go along with them).

Common Data Service

Expanding on the need for control of your organization’s data, the Power Platform offers a familiar experience for the system administrators of many organizations by using Azure Active Directory to authenticate users and the O365 Admin Center to manage licenses. Further, the management of the Platform is fortified by the installation and use of the (free!) Center of Excellence (COE) Starter Kit. This toolkit provides a starting point to gain organizational insight into the details of how the Platform is being used – locations, environments, flows, makers, connections, apps, and more can be administered with the kit. Drill into each application, tag it, track it, and monitor its usage. Customize the COE Starter Kit and shine a whole new light on what once might have been considered “rogue IT.”

So now that we understand that low code is a good thing, let’s recap what you’re getting with the Power Platform:

  • A way to maximize your resources – the low code features enable citizen developers to migrate their own work, and the pre-built features and “short-hand” coding allow your professional developers to focus on the most important and unique features of your organization’s workloads.
  • A system meant to scale – built on Azure and backed by Microsoft SLAs, this tool is meant for enterprise use and has the speed and reliability to back it. Enable your citizen developers to modernize their own workloads to speed up your cloud overhaul. Develop & leverage a custom, organization-wide base solution to start every new project with a bulk of known entities already installed and scale your implementations even faster.
  • A new level of visibility into your business – the Center of Excellence toolkit allows you to see all of the makers who are building, the apps they’re developing, the connections being made, and more, providing an entirely new window into (and level of control over) citizen development that might once have been seen as “rogue IT.”

To paraphrase Andrew Welch, Director of Business Applications at AIS and principal author of the Power Platform Adoption Framework (PPAF), on the subject: some business challenges can be solved with custom software development. Some business challenges can be solved with COTS solutions. Everything else is a missed opportunity to transform and modernize in the cloud – for which low-code cloud transformation with the Power Platform is the answer.

AIS is a Microsoft Partner, the premier developer of custom Power Platform solutions, and the publisher of the Power Platform Adoption Framework.

READY TO START YOUR JOURNEY?
AIS can help you when beginning your enterprise adoption journey with Power Platform.

If you are like me, you have used cloud services in a limited fashion to create VMs for testing, or perhaps you have used them extensively. You’d also like to gain an understanding of the broader group of services offered by cloud providers. In my situation, this was due to my recent attainment of an Engagement Manager position and my desire to help AIS expand our business through the development of new opportunities. I realized that I needed at least a top-layer understanding of our offerings in order to spot potential use cases AIS could present to solve problems, identify more cost-effective options to current solutions, and develop completely new solutions to improve client business. It was obvious to start with Microsoft’s Azure and Amazon’s AWS platforms, since these are the top focus of AIS and the industry as a whole.

What was not obvious was where to start. Both platforms are not only extremely broad but also moving targets. I needed to find a way to dip into this process without drowning in all the information, while still handling the responsibilities of my day job. I looked at classroom training options and YouTube videos, and continued researching until I stumbled upon two paths. These paths not only provided a nice prepackaged set of materials, but could be completed at my own pace, at home, and they resulted in certifications. I will get to the details, but first a word about certifications.

I am sure many of you will be rolling your eyes when you read the “certifications” aspect of that second-to-last sentence. Yes, certifications are not as valuable an indicator of a person’s skills and knowledge in an area as real-world experience. However, they provide the following benefits, in order from least to most important:

  1. Provide a good starting point for someone who has no current projects in an area.
  2. Fill knowledge gaps that even a person with experience in an area has, especially in those services or techniques that are not used often.
  3. Provide value to AIS in maintaining various statuses.
  4. Provide a potential client with proof that you at least have an understanding of the basics.
  5. Most importantly, they result in a $500 bonus from AIS, and reimbursement of testing and training costs!

The paths I found are the Microsoft Azure Fundamentals learning path and certification and the Amazon AWS Cloud Practitioner training and certification. The training for both includes videos, with the Azure path containing an estimated ten hours of content and the AWS training about five. The Azure path estimates were spot on, and the AWS training took a bit longer, due to my complete lack of experience with the platform.

Microsoft Azure Fundamentals

This path included videos, reading, hands-on exercises, and quick knowledge checks. It can be completed with an Azure account that you create just for the training or an account linked to the AIS subscription if you have one. Both the reading and the videos provide just enough information without getting bogged down in the minutiae. The only thing I had done with Azure prior to the training was create a few VMs to set up SharePoint environments. I had done that years ago, and I didn’t do much within those environments.

For me, most of the content was new. I believe that if I had more in-depth experience, the training would have filled in gaps with specific details.

These were the topics I found either completely new or helpful in understanding how to look at and/or pitch Azure services to clients:

  • Containers, app services, and serverless options and how they work
  • Reducing latency with Traffic Manager
  • Azure policies and tags to enforce standards
  • Review of data centers, region pairs, geographies, availability zones
  • Various ways to predict and manage costs, such as the pricing calculators, Azure Cost Management, and Azure Advisor

The training took me probably two-thirds of the estimated time, after which I went through the knowledge checks for each section once more. After that, I spent maybe an hour reviewing some things from the beginning. From there, I took the exam and passed. The exam process was interesting and can be done from home with software that enables a proctor to watch you. Prior to the exam, you are required to show the proctor the entire room and remove anything that might enable you to cheat.

After I completed the certification process, I submitted the cost of the exam ($100) as an expense as well as submitted my request for a certification bonus. I received both in a timely manner. See links at the end of this post for materials concerning reimbursements and bonuses. Don’t forget approval from your EM/AE prior to incurring any costs for which you might want reimbursement and to submit your updated certifications spreadsheet to the AIS PI Team.

AWS Cloud Practitioner

This path exclusively contains videos. In my opinion, the content is not as straightforward as the Azure Fundamentals content, and the videos cannot be sped up, which can be very frustrating. The actual content was a bit difficult to find; I have provided links at the conclusion of this post for quick reference. Much of the video content involves Linux examples, so PuTTY and other command-line tools were used. This added a further layer of complexity that I felt took away from the actual content (do I really need to know how to SSH into something to learn about the service?).

As far as content goes, everything is video; there is no reading, and there are no hands-on exercises or knowledge checks. I felt the reading in the Azure path broke things up, the hands-on exercises crystallized a few things for me, and the knowledge checks ensured I was tracking. I would like to see Amazon add some of these things. That being said, the videos are professionally done and include helpful graphics. With zero experience with AWS, I am still finding that I am able to grasp the concepts, and the videos do a decent job of presenting use cases for each service.

My biggest complaint is the inability to speed up videos that are obviously paced for the lowest common denominator; I admittedly find my attention waning often. Something I found that helps is taking notes. This allowed me to listen, write, and not get bored.

Amazon provides a list of recommended prep (see links below) that includes self-paced training, a one-day classroom option, an exam guide, a list of four base whitepapers and links to many others, practice exams, and a link to schedule the certification exam. I scanned the whitepapers. They all looked useful, but not necessary to knock out the exam. I say this with confidence, as I was able to pass the exam without a detailed review of the whitepapers. My technique was to outline the videos, then review my outlines over the course of a couple of weeks.

Summary

Whether you are a budding developer or analyst wishing to get a broad overview, a senior developer who wants to fill gaps, or a new EM like me who wants a bit of both, the Microsoft Azure Fundamentals learning track/certification and the Amazon AWS Cloud Practitioner training/certification are good places to start. AIS will cover any costs and provide you with some additional scratch for your effort. Obtaining these certifications also improves AIS standings with providers, clients, and the community as a whole. It also greatly improves your value to clients, meets the criteria of certain AIS Career Paths and Competencies, and who knows, you might learn something!

Links:

  1. Azure Fundamentals Learning Path: https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
  2. Azure Fundamentals Cert: AZ900 Microsoft Azure Fundamentals Exam.
  3. AWS Cloud Practitioner Preparation and Cert:  AWS Recommended Prep
  4. Training Reimbursements and Certification Bonuses: https://appliedis.sharepoint.com/sites/HR/Pages/Additional-benefits.aspx
  5. Submitting certification inventory to PI Team: Reach out to AIS – Process Improvement Team (ais-pi-team@appliedis.com) for more info.

Implementing a cloud strategy and modernizing legacy systems are two of the core objectives of most IT organizations. Purchasing a commercial off-the-shelf (COTS) product or SaaS offering can speed up modernization and comes with a lot of advantages. COTS products come with a proven track record and address specific business needs that would be difficult to justify building yourself. A COTS product shifts the liability of creating features from your organization to the vendor. Finally, COTS products promise a shorter timeframe to implementation. But even though you’re purchasing a solution to 80% of your problem, the hardest parts of implementing a COTS product are still your responsibility. Below are some key areas you will still own.

Security and Infrastructure

Security and infrastructure are your responsibility. An off-the-shelf product or Software as a Service (SaaS) product won’t address these for you. If it is a SaaS product, how will your hybrid network access it, and how is that access governed? You’ll need to do a risk assessment of the SaaS product, which includes how you connect to it, how it stores its data, and even how the vendor builds the software. If it is an off-the-shelf product, how will it be installed in the cloud? Ask whether it is cloud-native or whether it needs to run on virtual machines in the cloud. If it’s running on virtual machines, are those hardened, and who has access to them? Cloud virtual machines complicate networking since they need to be accessed from on-prem and may still need to reach into the cloud or the internet. That can leave a wide attack surface you’ll need to account for. Security and infrastructure are the biggest concerns, and you’ll need to own them.

Automation

One of the promises of moving to the cloud is gaining business agility, and automation is a key component of reaching that goal. A SaaS product removes the burden of deploying and maintaining the application, but there may still be a need to automate some aspects. For example, if the SaaS product must be configured, it might have a UI and an API for doing so. It’s in your best interest to bring the SaaS product into your normal development pipeline and apply infrastructure-as-code practices to it. If you purchase a COTS product, be sure you can stand up an entire environment, including installation of the product, with the click of a button. There is no excuse for not automating everything, and there are plenty of tools in Azure DevOps Pipelines for integration and automation.
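To make that concrete, here is a minimal sketch of what treating a SaaS product’s configuration as code might look like in an Azure DevOps YAML pipeline. The endpoint, secret variable, and configuration file below are hypothetical placeholders standing in for whatever API your particular product exposes.

```yaml
# Sketch: applying a (hypothetical) SaaS product's configuration from source
# control as part of a pipeline run, so environments can be recreated on demand.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Your own infrastructure-as-code deployment (ARM/Bicep/Terraform) would run here.

  - script: |
      curl --fail -X PUT "https://api.example-saas.com/v1/tenant-config" \
        -H "Authorization: Bearer $SAAS_API_TOKEN" \
        -H "Content-Type: application/json" \
        --data @config/saas-settings.json
    displayName: 'Apply SaaS product configuration (hypothetical API)'
    env:
      SAAS_API_TOKEN: $(saasApiToken)  # secret pipeline variable, mapped explicitly
```

Keeping the configuration file in the repository means the same pipeline that builds your code can also rebuild the product’s settings on demand.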

Integration

A COTS product provides 80% of the functionality needed, but what about the other 20%? The remaining functionality the product doesn’t provide is likely what differentiates your company from the others. There are always edge cases and custom functionalities that you’ll want, that either the vendor does not provide or that would be expensive for the vendor to implement. You’ll need to bring that work in-house or hire a systems integrator to hook things together. Whether it’s a SaaS product or a COTS product, integration with the applications that provide the remainder of your functionality is your responsibility, as is understanding how that integration will work before you purchase. A good COTS product will have a solid HTTP REST API. If the product you’re purchasing doesn’t have a simple API, consider a wrapper to make the integration easier; API Management is an Azure service that can do that translation for you. You might find that you want to reshape the COTS API into one that makes sense to the rest of your systems. Large COTS products should also support messaging of some kind, which helps build loosely coupled components with high cohesion. COTS products might also offer file-based and database integration; however, these should be avoided. Integration of the COTS product is your responsibility, and the effort can equal the implementation effort of the COTS product itself.
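As an illustration of the wrapper idea, here is a rough sketch of an OpenAPI definition for a thin facade that could be imported into Azure API Management and backed by the vendor’s API. The path and fields are invented for the example; the point is that the facade exposes the shape the rest of your systems expect.

```yaml
# Sketch: a facade API that normalizes a hypothetical COTS "orders" endpoint.
openapi: 3.0.1
info:
  title: Orders Facade (wraps a legacy COTS API)
  version: "1.0"
paths:
  /orders/{orderId}:
    get:
      summary: Return an order in the shape the rest of our systems expect
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Normalized order payload
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  status: { type: string }
                  lastUpdated: { type: string, format: date-time }
```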

Conclusion

COTS products can provide great benefits to your company and bring welcome new functionality quickly. Understand that your IT department will still have to drive the overall architecture and that you are responsible for everything around the COTS product. The bulk of this work will fall under security, infrastructure, automation, and integration. Focus on these concerns and you’ll have a successful implementation.

When Microsoft introduced pipelines as part of their Azure DevOps cloud service offering, we received the tools to add continuous integration (CI) and continuous delivery (CD) practices to our development processes. An Azure DevOps pipeline can be created in two ways: 1) the current generally available “classic” pipeline tooling, and 2) the new multi-stage YAML pipeline feature, which is currently in preview.

Classic Pipelines

Classic pipelines achieve CI through Azure DevOps build pipelines. A build pipeline executes before a developer integrates code changes into a code base. The pipeline does things like execute a build task, run the unit tests and/or run static code analysis. It then either accepts or rejects the new changes based on the outcome of these tasks.

CD is achieved through Azure DevOps release pipelines. After the build pipeline has produced a build artifact, a release pipeline will publish the artifact to various environments for manual functional testing, user experience testing and quality assurance. When testers have thoroughly tested the deployed artifacts, the release pipeline can then push the artifacts to a production environment.

As powerful as these classic CI/CD pipelines are, they do have their drawbacks. Firstly, the tooling for creating build and release pipelines does not provide a unified experience. CI pipelines provide an intuitive GUI to create and visualize the integration steps…

Classic build pipeline editor

… and also allow you to define those very same steps in YAML:

YAML build pipeline editor

Release pipelines also provide a GUI to create and visualize the pipeline steps. The problem is that the interface is different from that of the build pipeline and does not allow YAML definitions:

Classic release pipeline editor

Multi-stage Pipelines

To resolve these discrepancies, Microsoft introduced multi-stage pipelines. Currently in preview, these pipelines allow an engineer to define a build, a release, or a combined build-and-release pipeline as a single YAML document. Besides the obvious benefits gained through a unified development experience, there are many other good reasons to choose YAML over classic pipelines for both your builds and releases.
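For context, a combined build-and-release definition might look something like the sketch below; the stage names, environment, and steps are placeholders rather than a prescriptive template.

```yaml
# Sketch: build and release stages in a single multi-stage YAML pipeline.
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: echo "restore, build, and run unit tests here"
            displayName: 'Build and test'
          - publish: '$(System.DefaultWorkingDirectory)'
            artifact: drop

  - stage: DeployQA
    dependsOn: Build
    jobs:
      - deployment: DeployToQA
        environment: 'qa'   # approvals and checks can be attached to the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: echo "deploy the artifact to the QA environment here"
                  displayName: 'Deploy to QA'
```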

Since you commit YAML definitions directly to source control, you get the same benefits source control has been providing developers for decades. Here are the top 10 reasons (in no particular order) you should choose YAML for your next Azure DevOps pipeline:

1. History

Want to see what your pipeline looked like last month before you moved your connection strings to Azure KeyVault? No problem! Source control allows you to see every change ever made to your pipeline since the beginning of time.

2. Diff

Have you ever discovered an issue with your build but not known exactly when it started failing or why? Having the ability to compare the failing definition with the last known working definition can greatly reduce the recovery time.

3. Blame

Similarly, it can be useful to see who committed the bug that caused the failure and who approved the pull request. You can pull these team members into discussions on how best to fix the issue while ensuring that the original objectives are met.

4. Work Items

Having the ability to see what was changed is one thing but seeing why it was changed is another. By attaching a user story or task to each pipeline commit, you don’t need to remember the thought process that went into a particular change.

5. Rollback

If you discover that the pipeline change you committed last night caused a bad QA environment configuration, simply rollback to the last known working version. You’ll have your QA environment back up in minutes.

6. Everything As Code

Having your application, infrastructure and now build and release pipelines as code in the same source control repository gives you a complete snapshot of your system at any point in the past. By getting an older version of your repo, you can easily spin up an identical environment, execute the exact same pipelines and deploy the same code exactly as it was. This is an extremely powerful capability!

7. Reuse and Sharing

Sharing or duplicating a pipeline (or part thereof) is as simple as copy and paste. It’s just text so you can even email it to a colleague if desired.

8. Multiple Engineers

Modern CI/CD pipelines can be large and complex, and more than one engineer might modify the same YAML file, causing a conflict. Source control platforms solved this problem long ago and provide easy to use tools for merging conflicting changes. For better or worse, YAML definitions allow multiple engineers to work on the same file at the same time.

9. Peer Reviews

If application code peer reviews are important, so are pipeline peer reviews. The ability to submit a pull request before bringing in new changes allows team members to weigh in and provides an added level of assurance that the changes will perform as desired.

10. Branching

Have a crazy idea you want to try out? Create a new branch for it and trigger a pipeline execution from that branch. If your idea doesn’t pan out, simply delete the branch. No harm done.

Though still in preview, the introduction of fully text-based pipeline definitions that can be committed to source control provides benefits that cannot be achieved with classic GUI-based definitions, especially for larger organizations. Be sure to consider YAML for your next Azure DevOps pipeline implementation.

ACA Compliance Group needed help streamlining its communications landscape so that its fast-growing workforce could collaborate more effectively. AIS recommended starting small with Microsoft Teams adoption and utilizing Microsoft Planner to gain advocates, realize quick wins, and gather insights to guide the larger rollout.

Starting Their Cloud Transformation Journey

The cloud brings many advantages to both companies and their employees, including unlimited access and seamless collaboration. However, to unleash the full power of cloud-based collaboration, a company must select the right collaboration technology that fits their business needs and ensures employees adopt the technology and changes in practices and processes. This ultimately benefits the business through increased productivity and satisfaction.

In early 2019, an international compliance firm with around 800 employees contacted AIS to help migrate multiple email accounts into a single Office 365 (O365) Exchange account. They invited AIS to continue their cloud journey and help them:

  • Understand their existing business processes and pain points across multiple time zones, countries, departments, and teams.
  • Provide their employees with a secure, reliable, and integrated solution to effective communication and collaboration.
  • Increase employee productivity by improving file and knowledge sharing and problem-solving.
  • Reduce cost from licensing fees for products duplicating features already available through the company’s enterprise O365 license.

Kicking Off a Customer Immersion Experience

First, AIS provided a Microsoft Customer Immersion Experience (CIE) demonstration, which served as the foundational step to introduce all O365 tools. After receiving stakeholder feedback, needs, and concerns, we collaboratively determined the best order for rolling out the O365 applications. The client elected to move forward with Microsoft Teams adoption as the first step to implementing collaboration software in the organization.

Pilots for Microsoft Teams Adoption

Next, we conducted a pilot with two departments to quickly bring benefits to the organization without a large cost investment and to gather insights that would inform the overall Teams adoption plan and strategy for the entire organization. We confirmed with pilot study employees that they saw and welcomed the benefits that Microsoft Teams provides, including:

  • Reduced internal emails.
  • Seamless communication and collaboration among (remote) teams/departments.
  • Increased productivity, efficiency, and transparency.
  • Centralized and accessible location for files, documents, and resources in Teams.

The pilot study also found that adopting Microsoft Teams in the organization would require a paradigm shift. Many employees were used to email communication, including sending attachments back and forth that were hard to track. In addition, while some departments had sophisticated collaboration tools, a common collaboration tool across the company did not exist. For web conferencing, for example, different departments preferred different tools, such as GoToMeeting and WebEx, and most of them incurred subscription fees. Employees had to install multiple tools on their computers to collaborate across departmental boundaries.

QUESTIONS ABOUT TEAMS ADOPTION PROCESS?

Embracing Benefits of Microsoft Teams with Organizational Change Management (OCM)

We needed to help employees understand the benefits of Teams, embrace the new tool, and willingly navigate the associated changes. For the organization-wide deployment and Microsoft Teams adoption, we formed a project team with several roles, including a Project Manager, Change Manager, UX Researcher, Business Analyst, and Cloud Engineer. Organizational Change Management (OCM), User Experience (UX), and business analysis were as critical as the technical aspects of the cloud implementation.

Building on each other’s expertise, the project team worked collaboratively and closely with technical and business leaders at the company to:

  • Guide communication efforts to drive awareness of the project and support it.
  • Identify levers that would drive or hinder adoption and plan ways to promote or mitigate them.
  • Equip department leaders with champions and facilitate end-user Teams adoption best practices.
  • Guide end users on how to thrive using Teams through best practices and relevant business processes.
  • Provide data analytics and insights to support target adoption rates and customize training.
  • Use an agile approach to resolve both technical issues and people’s pain points, including using Teams for private chats, channel messages, and meetings.
  • Develop a governance plan that addressed technical and business evolution, accounting for the employee experience.

Cutting Costs & Boosting Collaboration

At the end of the 16-week engagement, AIS helped the client achieve its goals of enhanced collaboration, cost savings, and 90% Teams use with positive employee feedback. The company was well-positioned to achieve 100% by the agreed-upon target date.

Our OCM approach, which is grounded in the Prosci ADKAR® framework, a leading change management framework based on 20 years of research, significantly contributed to our project success. As Prosci describes on their website, “ADKAR is an acronym that represents the five tangible and concrete outcomes that people need to achieve for lasting change”:

  • Awareness of the need for change
  • Desire to support the change
  • Knowledge of how to change
  • Ability to demonstrate skills and behaviors
  • Reinforcement to make the change stick

The OCM approach was designed to provide busy executives, leaders, and end-users with the key support and actionable insights needed to achieve each outcome necessary for Teams adoption efficiently and effectively.

If you would like to participate in a CIE demonstration or learn more about adopting cloud-based collaboration tools and practices in your company, we are here to help!

READ MORE ABOUT OUR SUCCESS WITH
ACA COMPLIANCE GROUP

Rehosting Considerations

What is Rehosting?

Rehosting is an approach to migrating business applications hosted on-premises in data center environments to the cloud by moving the application “as-is,” with little to no changes to the functionality. A common rehosting scenario is the migration of applications that were initially developed for an on-premises environment to take advantage of cloud computing benefits. These benefits may include increased availability, faster networking speeds, reduced technical debt, and a pay-per-use cost structure.

In our experience, rehosting is well suited for organizations under time-sensitive data center evacuation mandates, facing pressure from leadership to migrate, running COTS software that doesn’t support modernization, or those with business-critical applications on end-of-support technologies. These organizations often opt for a rehost then transform strategy, as reviewed in the following blog, Cloud Transformation Can Wait… Get Me to the Cloud Now!

Below we outline important considerations, benefits, and challenges associated with a rehost strategy, based on our real-world experiences moving both custom and packaged commercial on-premises applications to the cloud. We’ll also discuss steps to take when your migration initiative goes beyond rehosting, requiring the assessment of alternative migration approaches, such as re-platforming and refactoring.

Critical Considerations

When moving on-premises business-critical applications to the cloud, there are critical considerations that span technical, operational, and business domains. Below are three key components not to be overlooked when defining your cloud migration strategy:

  • Establishing a shared vision: Ensuring you have set goals and an executive sponsor.
  • Understanding your why: Why are you migrating to the cloud in the first place?
  • Defining the business impact: What impact do you expect from your migration efforts, and are your goals realistic based on the chosen approach?

Establishing a Shared Vision

Establish a Shared Vision with Stakeholders

The landscape of on-premises systems is often governed by many stakeholders, both business and IT, with competing goals, risk profiles, and expected outcomes from a migration effort. Having a clear vision for your rehost initiative with key roles and responsibilities defined is critical to the timeliness, investment, and overall success of your project. Finding an executive sponsor to unite the various groups, make decisions, and define the business goals and expected outcomes is vital in risk management.

As part of creating this shared vision, the executive sponsor needs to ensure:

  • Goal Alignment: A shared vision among various business and IT stakeholders sets direction and expectations for the project. It allows all parties, including vendors and internal resources, to understand the goal(s) and the role they’ll play in the project.
  • Sufficient Budgeting and Resource Allocation: Appropriate budget and resources must be allocated for executing tasks related to this partnership, before the start of the migration effort to ensure timely project completion.
  • Proper Documentation of Existing Systems: Critical information about on-premises systems and operations is often either insufficient or missing entirely. System documentation is mandatory to migrate systems and uphold their intended purpose.
  • Product Ownership: On-premises business application suites are often acquired or internally developed. Original vendors may be out of business, so products are no longer viable. Similarly, a custom product may no longer be supported or understood due to missing source code. An owner needs to be designated to determine the future of the product.
  • Organizational Change Management: Without user adoption, your cloud migration will fail. Change management efforts enable cloud technology adoption and require proper planning and execution from the start.

The considerations outlined above should be discussed up front, and partnerships among stakeholder groups must be established to accomplish the intended goal(s) of migration under executive sponsor leadership.

Understand Your Why

Understand Why You're Moving

You’ve heard the stories about failed cloud migrations. Those stories often start with misaligned expectations and a rushed execution, which are among the top reasons cloud migrations result in a backslide to on-premises environments. Migrating to the cloud isn’t a silver bullet – not every organization will experience cost savings or even immediate functionality improvements from a rehosting project, but there are many opportunities for cost avoidance and optimization of spend.

As an IT director or manager, it’s critical to ensure executive-level sponsors understand how different migration approaches align with anticipated outcomes. There’s often pressure from leadership to migrate to the cloud, and understandably so, given the countless cloud benefits and the many challenges associated with aging on-premises solutions. However, understanding and communicating what’s realistic for your organization and how different approaches will address various business goals is crucial.

Data Center Evacuations & Unsupported Technology

Organizations migrating based on a mandated data center evacuation or the security and compliance risks associated with unsupported or end-of-support technology often look to a rehost strategy as a first step. This helps accomplish the business goal of reducing technical debt or remediating compliance concerns quickly.

Reaping the Benefits of Cloud-Native Solutions

There are many other reasons organizations look to the cloud, such as staying competitive, increasing time to value, or the ability to innovate. To fully realize the cloud outcomes that motivate these decisions – including greater flexibility, scalability, data security, built-in disaster recovery options, and improved collaboration – additional planning and refactoring of on-premises applications are often required. In these cases, sometimes we see a rehost as the first stage (as leadership wants to see quick results or has made a public commitment to migrate to the cloud), followed by more advanced modernization efforts.

To get to the root of goals and expectations, consider the following questions as you build your roadmap:

  1. What are your business objectives for cloud adoption, and how will they help further the company vision?
  2. Is there a set timeline to complete the cloud migration effort?
  3. What internal and external resources are available to support a cloud migration?
  4. How many applications are in your portfolio, and do you plan to migrate everything, or are you considering a hybrid model approach?
  5. What are the technical requirements and interdependencies of your applications? How will you assess cloud readiness?
  6. What are the necessary governance, security, and compliance considerations?
  7. Who will be responsible for moving workloads to the new cloud platform? Who will perform the migration, and manage the workloads? Will you be doing it by yourself, or will it be a shared initiative?
  8. How do you intend to use automation to reduce manual efforts and streamline provisioning and governance?

As you answer the questions above, you may find that a rehost effort is sufficient. Likewise, you may choose to explore a lead horse application approach as part of your migration strategy to better understand the value derived from various modernization tactics.

Uncovered Benefits of the Cloud

Uncover Additional Cloud Benefits

If your organization is interested in exploring cloud benefits that go beyond what a rehost effort can provide, migration options that are more involved than rehosting may be worth your consideration. Organizations looking to modernize through re-platforming or refactoring may be motivated by cloud benefits such as:

  • Faster time to market, product release cycles, and/or pace of innovation
  • Enriched customer and end-user experiences
  • Improved employee technology, collaboration, and processes
  • Better reliability and networking speeds
  • Reduced cost of labor and/or maintenance
  • Ability to leverage emerging technology
  • Built-in disaster recovery options
  • Flexibility and scalability
  • Data security
  • Cost allocation for budgeting and showback/chargeback
  • Move from Capex to Opex (or realize Capex by buying resource commitments)

If you are facing tight timelines to migrate, a rehost effort can get you one step closer to realizing the above benefits. Through an initial migration, you can look to a proof of concept to gain a further understanding of the business impact various approaches have to offer while incrementally progressing cloud transformation.

TO LEARN MORE ABOUT THE DIFFERENT APPROACHES TO MIGRATION AND MODERNIZATION, DOWNLOAD OUR FREE WHITEPAPER.

Challenges

While rehosting is a faster, less resource-intensive migration approach and a great first step into the cloud, it comes with challenges and limitations.

The primary limitation when migrating certain on-premises applications to the cloud is the application’s inherent cloud compatibility. Some applications have internal and external dependencies that limit their ability to take advantage of more advanced cloud benefits once rehosted.

While rehosting allows you to modernize the application environment, resulting in outcomes such as reduced Data Center costs, other cloud benefits aren’t fully realized. Outcomes such as elasticity and the ability to take advantage of cloud-native features are not available with a strictly rehost strategy.

While often more cost-effective than on-premises hosting, rehosted applications can sometimes cost more to run in the cloud than re-platformed or refactored ones, particularly without a FinOps strategy to master the unit economics of the cloud for competitive advantage. To show fast progress, rehosting is often a great transitional stage on the way to a cost-effective cloud solution, especially for organizations on a tight timeline. During this stage, managing cloud costs and realizing cloud value with a FinOps practice is key.

Feeling Stuck?

Not Sure Where to Start?

If you’re stuck in analysis paralysis, work with a consultant who’s been through various migration projects from start to finish and understands the common challenges and complexities of different approaches.

Whether you’re considering Azure, Office 365, Power Platform, or another cloud platform, AIS has a range of Adoption Frameworks and Assessments that can help you understand your options. With our help, create a shared vision and align business goals to the appropriate migration approaches.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO CLOUD MIGRATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.

Lift n Shift Approach to Cloud Transformation

What does it mean to Rehost an application?

Rehosting is an approach to migrating business applications hosted in on-premises data center environments to the cloud by moving the application “as-is,” with little to no changes to the business functions performed by the application. It’s a faster, less resource-intensive migration approach that gets your apps into the cloud without much code modification. It is often a good first step to cloud transformation.

Organizations with applications that were initially developed for an on-premises environment commonly look to rehosting to take advantage of cloud computing benefits. These benefits may include increased availability and networking speeds, reduced technical debt, and a pay-per-usage cost structure. When defining your cloud migration strategy, it’s essential to analyze all migration approaches, such as re-platforming, refactoring, replacing, and retiring.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

Read More…

Please note: The extended support deadline for Exchange 2010 has changed since the original publication of this article. It has moved from January 14, 2020 to October 13, 2020. There is still time to consider your upgrade strategy and migrate your legacy Exchange environments to the Microsoft cloud!

Exchange 2010 is at the end of its journey, and what a long road it’s been! For many customers, it has been a workhorse product facilitating excellent communications with their employees. It’s sad to see the product go, but it’s time to look to the future, and the future is in the Microsoft Cloud with Exchange Online.

What does End of Support mean for my organization?

email security exchange 2010 end of support risks

While Exchange 2010 isn’t necessarily vanishing from the messaging ecosystem, support of the product ends in all official capacities on January 14, 2020. Additionally, Office 2010 will be hitting the end of support on October 13, 2020, which means your old desktop clients will also be unsupported within the same year. What this means is that businesses using Exchange and Office applications will be left without support from Microsoft – paid or free. End of support also means the end of monthly security updates. Without regular security updates and patches from Microsoft to protect your environment, your company is at risk.

  1. Security risks – Malware protection and attack surface protection become more challenging as products are off lifecycle support. Any new vulnerability may not be disclosed or remediated.
  2. Compliance risks – As time goes on, organizations must adhere to new compliance requirements – for example, GDPR was a massive recent deadline. While managing these requirements on-premises is possible, it is often challenging and time-consuming. Office 365 offers improved compliance features for legal and regulatory requirements. The most notable is that Microsoft cloud environments comply with most regulatory needs, including HIPAA, FISMA, FedRAMP, and more.
  3. Lack of software and hardware support – No technical support for problems that may occur, such as bug fixes, server stability and usability issues, and time zone updates. Dropped support for interoperation with third-party vendors like MDM and message hygiene solutions can mean your end-user access stops working. Not to mention the desktop and mobile mail solutions already deployed, or perhaps being upgraded, around this now decade-old infrastructure.
  4. Speaking of old infrastructure – This isn’t just about applications and services. For continued support and to meet compliance requirements, you must migrate to newer hardware to retain, store, and protect your mailbox and associated data. Office 365 absolves you of all infrastructure storage costs. That is a perfect opportunity, and often a justification in and of itself, to move to the cloud.

It’s time to migrate to Office 365 . . . Quickly!

There are many great reasons to move your Exchange environment to a hosted environment in Office 365. The biggest is that your company will no longer have to worry about infrastructure costs.

Here are some of the other things you will no longer have to worry about:

  • Purchasing and maintaining expensive storage and hardware infrastructure
  • Time spent keeping up to date on product, security, and time zone fixes
  • Time spent on security patching OS or updating firmware
  • Cost for licensing OS or Exchange Servers
  • Upgrading to a new version of Exchange; you’re always on the latest version of Exchange in Office 365
  • Maintaining compliance and regulations for your infrastructure, whether industry, regional, or government
  • Day-to-day database, storage, server, and failover management; in an environment with thousands of users and potentially unlimited mailboxes, absolving your admins of these tasks is a huge relief and cost savings, letting your team focus on Exchange administration

Another big cost for on-premises is storage: data repositories for mailbox data retention, archiving, and journaling. This value cannot be overstated – do you have large mailboxes or archive mailboxes? Are you paying for an archiving or eDiscovery solution? If you consider Exchange Online Plan 2 licenses (often bundled into larger enterprise licenses such as E3 or E5), they allow for archive mailboxes of unlimited size. These licenses also offer eDiscovery and compliance options that meet the needs of complex organizations.

The value of the integrated cloud-based security and compliance resources in the Office 365 environment is immense. Many of our customers have abandoned their entire existing MDM solutions in favor of Intune. Data Loss Prevention allows you to protect your company data against exfiltration. Office 365 Advanced Threat Protection fortifies your environment against phishing attacks and offers zero-day attachment reviews. These technologies are just the tip of the iceberg and can either replace or augment an existing malware and hygiene strategy. And all these solutions specifically relate and interoperate with Exchange Online.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

Here are some other technologies that work seamlessly with Exchange Online and offer integral protection for it as well as for other Microsoft cloud and SaaS solutions:

  • Conditional Access – Precise, granular access control to applications
  • Intune – Device and application management and protection
  • Azure AD Identity Protection – Manage risk levels for associate activity
  • Azure Information Protection – Classify and protect documents
  • Identity Governance – Lifecycle management for access to groups, roles, and applications

Think outside the datacenter

data center transformation services

There are advantages to thinking outside the (mail)box when considering an Exchange migration strategy to the cloud. Office 365 offers an incredible suite of interoperability tools to meet most workflows. So while we can partner with you on the journey to Office 365, don’t overlook some of the key tools that are also available in the Microsoft arsenal. These include OneDrive for Business, SharePoint Online, and Microsoft Teams, all of which could be potential next steps in your SaaS journey! Each tool is a game-changer in its own right, and each will bring incredible collaboration value to your associates.

AIS has helped many customers migrate large and complex on-premises environments to Office 365.

Whether you need to:

  • Quickly migrate Exchange to Exchange Online for End of Support
  • Move File services to OneDrive and SharePoint Online for your Personal drives/Enterprise Shares/Cloud File Services
  • Adopt Microsoft Teams from Slack/HipChat/Cisco Teams
  • Migrate large and complex SharePoint farm environments to SharePoint Online

Whatever it is, we’ve got you covered.

What to do next?

Modern Workplace Assessment for Exchange 2010

Take action right now, and start a conversation with AIS today. Our experts will analyze your current state and roll out your organization’s migration to Office 365 quickly and seamlessly.

To accelerate your migration to Office 365, let us provide you with a free Modern Workplace assessment that comprehensively evaluates:

  • Organization readiness for adoption of Office 365 (Exchange and desktop-focused)
  • Desktop-focused insights and opportunities to leverage Microsoft 365 services
  • Total cost of ownership (TCO) for migrating Exchange users to Office 365, including licensing fees
  • Migration plans with detailed insights and approaches for service migrations such as:
    • Exchange to Exchange Online
    • File servers, personal shares, and Enterprise shares to OneDrive for Business and SharePoint Online
    • Slack / HipChat / Cisco Teams to Microsoft Teams
    • SharePoint Server to SharePoint Online

GET AN ASSESSMENT OF YOUR EXCHANGE 2010 ENVIRONMENT

Wrapping Up

Migrating your email to Office 365 is your best and simplest option to help you retire your Exchange 2010 deployment. With a migration to Office 365, you can make a single hop from old technology to state-of-the-art features.

AIS has the experience and expertise to evaluate and migrate your on-premises Exchange and collaboration environments to the cloud. Let us focus on the business of migrating your on-premises applications to Office 365, so you can focus on the business of running your business. This is the beginning of a journey, but it is one AIS is familiar with and comfortable guiding you through to a seamless and successful cloud migration. If you’re interested in learning more about our free Modern Workplace assessment or getting started with your Exchange migration, reach out to AIS today.

NOT SURE WHERE TO START? REACH OUT TO AIS TO START THE CONVERSATION.

As we think about services that Azure can offer, we often think about apps (e.g., App Services, AKS, and Service Fabric) and data (e.g., Azure Storage, Databricks, and Azure Data Lake). It turns out that you can also leverage Azure networking as a standalone offering, also known as Network-as-a-Service (NaaS). Networking is a basic building block for all app and data services in Azure, but by NaaS I am explicitly talking about leveraging Azure networking in a *standalone* manner. Allow me to explain what I mean: in the picture below, you will see a representation of the Azure global footprint. It is so vast that it includes 100K+ miles of fiber and subsea cables, and 130 edge locations connecting over 50 regions worldwide. Think of NaaS as a way to tap into Azure’s global infrastructure to improve the network performance and resilience of your applications, regardless of whether the apps are hosted in Azure or not.

Azure's Global Footprint

Let’s discuss two specific Azure networking services that offer NaaS capabilities. Note that there are other services, like Azure Firewall – think firewall as a service – that can fall into the NaaS category. However, I am limiting my discussion to two services – Azure Front Door Service and Azure Virtual WAN Service. In my opinion, these services closely align with a focus on leveraging Azure network infrastructure and services in a standalone manner.

Azure Front Door Service Icon

Azure Front Door Service

Azure Front Door service allows you to define global routing for your applications that optimizes performance and resilience. Front Door is a layer 7 (HTTP/HTTPS) service. Please refer to the diagram below for a high-level view of how Front Door works: you advertise your application’s URL using the anycast protocol. This way, traffic directed towards your application gets picked up by the “closest” Azure Front Door edge location and routed to your application, whether it is hosted in Azure or on-premises – for applications hosted outside of Azure, the traffic traverses the Azure network to the point of exit closest to the location of the app.

The primary benefit of using the Azure Front Door service is improved network performance, achieved by routing over the Azure backbone (instead of the long-haul public internet). It turns out there are several secondary benefits to highlight: you can increase the reliability of your application by having Front Door provide instant failover to a backup location, and the Azure Front Door service uses smart health probes to check the health of your application. Additionally, Front Door offers SSL termination and certificate management, application security via Web Application Firewall, and URL-based routing.

Routing Azure Front Door

Azure Virtual WAN

Azure Virtual WAN offers branch connectivity to, and through, Azure. In essence, think of Azure regions as hubs that, along with the Azure network backbone, can help you establish branch-to-VNet and branch-to-branch connectivity.

You are probably wondering how Virtual WAN relates to existing cloud connectivity options like point-to-site, site-to-site, and ExpressRoute. Azure Virtual WAN brings these connectivity options together into a single operational interface.

Azure Virtual Wan Branch Connectivity

The following diagram illustrates a client’s virtual network overlay over the Azure backbone. The Azure Virtual WAN hub is located in the West Europe region. The virtual hub is a managed virtual network that, in turn, enables connectivity to VNets in West Europe (VNetA and VNetB) and to an on-premises branch office (testsite1) connected via a site-to-site VPN tunnel over IPsec. An important thing to note is that the site-to-site connection is hooked to the virtual hub and *not* directly to the VNet (as is the case with a virtual network gateway).

Virtual Network Overlay Azure Backbone

Finally, you can work with one of the many Azure Virtual WAN partners to automate the site-to-site connection, including setting up the branch VPN or SD-WAN (software-defined wide area network) device so that connectivity with Azure is configured automatically.

I just returned from Microsoft BUILD 2019 where I presented a session on Azure Kubernetes Services (AKS) and Cosmos. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the audience interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database service (like Cosmos DB).

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck

We started with a discussion of cloud-native apps, along with a quick introduction to AKS and Cosmos DB. We quickly transitioned into stateful app considerations and talked about the new stateful capabilities in Kubernetes, including persistent volumes (PV), persistent volume claims (PVC), StatefulSets, the Container Storage Interface (CSI), and Operators. While these capabilities represent significant progress, they don’t match up to external services like Cosmos DB.
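For readers who haven’t used those primitives, the manifest below is a generic sketch of a StatefulSet that provisions a persistent volume per replica through a volume claim template; the image, storage class, and sizes are illustrative placeholders, and a matching headless Service is assumed.

```yaml
# Sketch: each StatefulSet replica gets its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db          # assumes a headless Service named demo-db exists
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: example/stateful-db:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium  # e.g., an Azure disk-backed class
        resources:
          requests:
            storage: 10Gi
```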

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Cloud Native Tooling

One option is to use the Open Service Broker – it allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (the Service Catalog CLI).
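As a rough sketch of what that looks like from the cluster’s point of view, the Service Catalog lets you request and bind an external service with ordinary Kubernetes manifests. The class and plan names below are purely illustrative; the real names depend on the broker you register (for example, the Open Service Broker for Azure exposes Cosmos DB under its own class names).

```yaml
# Sketch: provisioning and binding an external service via the Service Catalog
# (Open Service Broker API). Requires Service Catalog and a registered broker.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: demo-cosmos
  namespace: demo
spec:
  clusterServiceClassExternalName: example-cosmosdb  # placeholder class name
  clusterServicePlanExternalName: example-plan       # placeholder plan name
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: demo-cosmos-binding
  namespace: demo
spec:
  instanceRef:
    name: demo-cosmos
  secretName: demo-cosmos-credentials  # connection details are written to this Secret
```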

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck svcat

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck SRE

External services like Cosmos DB can go beyond cluster SRE and offer “turn-key” SRE, in essence – specifically, geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Multi-Master Support

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Configure Regions

Microsoft Build Session Architecting Cloud-Native Apps with AKS and Cosmos DB Slide Deck Portability

Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. The Open Service Broker does not deal with syntactic differences, such as connection string prefix differences between cloud providers. One way to handle these differences is to use Helm.
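One way to picture the Helm approach: keep the provider-specific pieces in per-provider values files and let a chart template assemble the connection string. The keys below are invented for illustration, not a standard chart schema.

```yaml
# Sketch: provider-specific values feed a single template.
#
# values-azure.yaml
#   database:
#     connectionStringPrefix: "AccountEndpoint=https://"
#     host: "myaccount.documents.azure.com:443/"
#
# values-other-cloud.yaml
#   database:
#     connectionStringPrefix: "mongodb://"
#     host: "myaccount.example-cloud.com:27017/"

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-db-config
data:
  CONNECTION_STRING: "{{ .Values.database.connectionStringPrefix }}{{ .Values.database.host }}"
```

At deployment time you would pick the matching values file, for example: helm install my-app ./chart -f values-azure.yaml.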

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT