Rehosting Considerations

What is Rehosting?

Rehosting is an approach to migrating business applications hosted on-premises in data center environments to the cloud by moving the application “as-is,” with little to no changes to the functionality. A common rehosting scenario is the migration of applications that were initially developed for an on-premises environment to take advantage of cloud computing benefits. These benefits may include increased availability, faster networking speeds, reduced technical debt, and a pay-per-use cost structure.

In our experience, rehosting is well suited for organizations under time-sensitive data center evacuation mandates, facing pressure from leadership to migrate, running COTS software that doesn’t support modernization, or those with business-critical applications on end-of-support technologies. These organizations often opt for a rehost then transform strategy, as reviewed in the following blog, Cloud Transformation Can Wait… Get Me to the Cloud Now!

Below we outline important considerations, benefits, and challenges associated with a rehost strategy, based on our real-world experiences moving both custom and packaged commercial on-premises applications to the cloud. We’ll also discuss steps to take when your migration initiative goes beyond rehosting, requiring the assessment of alternative migration approaches, such as re-platforming and refactoring.

Critical Considerations

When moving on-premises business-critical applications to the cloud, there are critical considerations that span technical, operational, and business domains. Below are three key components not to be overlooked when defining your cloud migration strategy:

  • Establishing a shared vision: Ensuring you have set goals and an executive sponsor.
  • Understanding your why: Why are you migrating to the cloud in the first place?
  • Defining the business impact: What impact do you expect from your migration efforts, and are your goals realistic given the chosen approach?

Establishing a Shared Vision

The landscape of on-premises systems is often governed by many stakeholders, both business and IT, with competing goals, risk profiles, and expected outcomes from a migration effort. Having a clear vision for your rehost initiative, with key roles and responsibilities defined, is critical to the timeliness, investment, and overall success of your project. Finding an executive sponsor to unite the various groups, make decisions, and define the business goals and expected outcomes is vital to managing risk.

As part of creating this shared vision, the executive sponsor needs to ensure:

  • Goal Alignment: A shared vision among the various business and IT stakeholders sets direction and expectations for the project. It allows all parties, including vendors and internal resources, to understand the goal(s) and the role they’ll play in the project.
  • Sufficient Budgeting and Resource Allocation: Appropriate budget and resources must be allocated for the migration tasks before the effort begins, to ensure timely project completion.
  • Proper Documentation of Existing Systems: Critical information about on-premises systems and operations is often either insufficient or missing entirely. System documentation is mandatory to migrate systems and uphold their intended purpose.
  • Product Ownership: On-premises business application suites are often acquired or internally developed. Original vendors may be out of business, leaving the product no longer viable. Likewise, a custom product may no longer be supported or understood because its source code is missing. An owner needs to be designated to determine the future of the product.
  • Organizational Change Management: Without user adoption, your cloud migration will fail. Change management efforts enable cloud technology adoption and require proper planning and execution from the start.

The considerations outlined above should be discussed up front, and partnerships among stakeholder groups must be established to accomplish the intended goal(s) of migration under executive sponsor leadership.

Understand Your Why

You’ve heard the stories about failed cloud migrations. Those stories often start with misaligned expectations and rushed execution, which are among the top reasons cloud migrations result in a backslide to on-premises environments. Migrating to the cloud isn’t a silver bullet – not every organization will experience cost savings or immediate functionality improvements from a rehosting project – but there are many opportunities for cost avoidance and optimization of spend.

As an IT director or manager, it’s critical to ensure executive-level sponsors understand how different migration approaches align to anticipated outcomes. There’s often pressure from leadership to migrate to the cloud – and understandably so, given the countless cloud benefits and the many challenges associated with aging on-premises solutions. However, understanding and communicating what’s realistic for your organization, and how different approaches address various business goals, is crucial.

Data Center Evacuations & Unsupported Technology

Organizations migrating based on a mandated data center evacuation or the security and compliance risks associated with unsupported or end-of-support technology often look to a rehost strategy as a first step. This helps accomplish the business goal of reducing technical debt or remediating compliance concerns quickly.

Reaping the Benefits of Cloud-Native Solutions

There are many other reasons organizations look to the cloud, such as staying competitive, accelerating time to value, or the ability to innovate. To fully realize the cloud outcomes that motivate these decisions – including greater flexibility, scalability, data security, built-in disaster recovery options, and improved collaboration – additional planning and refactoring of on-premises applications are often required. In these cases, we sometimes see a rehost as the first stage (when leadership wants to see quick results or has made a public commitment to migrate to the cloud), followed by more advanced modernization efforts.

To get to the root of goals and expectations, consider the following questions as you build your roadmap:

  1. What are your business objectives for cloud adoption, and how will they help further the company vision?
  2. Is there a set timeline to complete the cloud migration effort?
  3. What internal and external resources are available to support a cloud migration?
  4. How many applications are in your portfolio, and do you plan to migrate everything, or are you considering a hybrid model approach?
  5. What are the technical requirements and interdependencies of your applications? How will you assess cloud readiness?
  6. What are the necessary governance, security, and compliance considerations?
  7. Who will be responsible for moving workloads to the new cloud platform and managing them once there? Will you be doing it yourself, or will it be a shared initiative?
  8. How do you intend to use automation to reduce manual efforts and streamline provisioning and governance?

As you answer the questions above, you may find that a rehost effort is sufficient. Likewise, you may choose to explore a lead horse application approach as part of your migration strategy to better understand the value derived from various modernization tactics.

Uncover Additional Cloud Benefits

If your organization is interested in exploring cloud benefits that go beyond what a rehost effort can provide, migration options that are more involved than rehosting may be worth your consideration. Organizations looking to modernize through re-platforming or refactoring may be motivated by cloud benefits such as:

  • Faster time to market, product release cycles, and/or pace of innovation
  • Enriched customer and end-user experiences
  • Improved employee technology, collaboration, and processes
  • Better reliability and networking speeds
  • Reduced cost of labor and/or maintenance
  • Ability to leverage emerging technology
  • Built-in disaster recovery options
  • Flexibility and scalability
  • Data security
  • Cost allocation for budgeting and showback/chargeback
  • Move from Capex to Opex (or realize Capex by buying resource commitments)

If you are facing tight timelines to migrate, a rehost effort can get you one step closer to realizing the above benefits. Through an initial migration, you can look to a proof of concept to gain a further understanding of the business impact various approaches have to offer while incrementally progressing cloud transformation.

TO LEARN MORE ABOUT THE DIFFERENT APPROACHES TO MIGRATION AND MODERNIZATION, DOWNLOAD OUR FREE WHITEPAPER.

Challenges

While rehosting is a faster, less resource-intensive migration approach and a great first step into the cloud, it comes with challenges and limitations.

The primary limitation when migrating certain on-premises applications to the cloud is the application’s inherent cloud compatibility. Some applications have internal and external dependencies that limit their ability to take advantage of more advanced cloud benefits once rehosted.

While rehosting allows you to modernize the application environment, resulting in outcomes such as reduced data center costs, other cloud benefits aren’t fully realized. Outcomes such as elasticity and the ability to take advantage of cloud-native features are not available with a rehost-only strategy.

And while cloud hosting is generally more cost-effective than on-premises hosting, a rehosted application can cost more to run than a re-platformed or refactored one, particularly without a FinOps strategy to master the unit economics of the cloud for competitive advantage. Still, rehosting is often a great transitional stage for organizations on a tight timeline that need to show fast progress while working towards a cost-effective cloud solution. During this stage, managing cloud costs and realizing cloud value with a FinOps practice is key.

Feeling Stuck?

If you’re stuck in analysis paralysis, work with a consultant who has been through various migration projects from start to finish and understands the common challenges and complexities of different approaches.

Whether you’re considering Azure, Office 365, Power Platform, or another cloud platform, AIS has a range of Adoption Frameworks and Assessments that can help you understand your options. With our help, you can create a shared vision and align business goals to the appropriate migration approaches.

GET YOUR ORGANIZATION ON THE RIGHT TRACK TO CLOUD MIGRATION. CONTACT AIS TODAY TO DISCUSS YOUR OPTIONS.

Lift n Shift Approach to Cloud Transformation

What does it mean to Rehost an application?

Rehosting is an approach to migrating business applications hosted in on-premises data center environments to the cloud by moving the application “as-is,” with little to no changes to the business functions performed by the application. It’s a faster, less resource-intensive migration approach that gets your apps into the cloud without much code modification. It is often a good first step to cloud transformation.

Organizations with applications that were initially developed for an on-premises environment commonly look to rehosting to take advantage of cloud computing benefits. These benefits may include increased availability and networking speeds, reduced technical debt, and a pay-per-usage cost structure. When defining your cloud migration strategy, it’s essential to analyze all migration approaches, such as re-platforming, refactoring, replacing, and retiring.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

Exchange 2010 is at the end of its journey, and what a long road it’s been! For many customers, it has been a workhorse product facilitating excellent communications with their employees. It’s sad to see the product go, but it’s time to look to the future, and the future is in the Microsoft Cloud with Exchange Online.

What does End of Support mean for my organization?

While Exchange 2010 isn’t necessarily vanishing from the messaging ecosystem, support of the product ends in all official capacities on January 14, 2020. Additionally, Office 2010 will be hitting the end of support on October 13, 2020, which means your old desktop clients will also be unsupported within the same year. What this means is that businesses using Exchange and Office applications will be left without support from Microsoft – paid or free. End of support also means the end of monthly security updates. Without regular security updates and patches from Microsoft to protect your environment, your company is at risk.

  1. Security risks – Malware protection and attack surface protection become more challenging once a product falls off lifecycle support. New vulnerabilities may never be disclosed or remediated.
  2. Compliance risks – As time goes on, organizations must adhere to new compliance requirements – for example, GDPR was a massive recent deadline. While managing these requirements on-premises is possible, it is often challenging and time-consuming. Office 365 offers improved compliance features for legal and regulatory requirements. The most notable is that Microsoft cloud environments comply with most regulatory needs, including HIPAA, FISMA, FedRAMP, and more.
  3. Lack of software and hardware support – There is no technical support for problems that may occur, such as bug fixes, server stability and usability issues, and time zone updates. Dropped support for interoperation with third-party vendors, like MDM and message hygiene solutions, can mean end-user access stops working – not to mention the desktop and mobile mail solutions already deployed, or perhaps being upgraded, around this now decade-old infrastructure.
  4. Speaking of old infrastructure – This isn’t just about applications and services. For continued support and to meet compliance requirements, you must migrate to newer hardware to retain, store, and protect your mailbox and associated data. Office 365 absolves you of all infrastructure storage costs. That is a perfect opportunity, and often a justification in and of itself, to move to the cloud.

It’s time to migrate to Office 365 . . . Quickly!

There are many great reasons to move your Exchange environment to a hosted environment in Office 365, the biggest being that your company will no longer have to worry about infrastructure costs.

Here are some of the other significant burdens you’ll no longer have to worry about:

  • Purchasing and maintaining expensive storage and hardware infrastructure
  • Time spent keeping up to date on product, security, and time zone fixes
  • Time spent on security patching OS or updating firmware
  • Cost for licensing OS or Exchange Servers
  • Upgrading to a new version of Exchange; in Office 365, you’re always on the latest version
  • Maintaining compliance and regulations for your infrastructure, whether industry, regional, or government
  • Day-to-day database, storage, server, and failover management – with thousands of users and potentially unlimited mailboxes, absolving your admins of this work is a huge relief and cost savings, letting your team focus on Exchange administration

Another big on-premises cost is storage: data repositories for mailbox data retention, archiving, and journaling. This value cannot be overstated – do you have large mailboxes or archive mailboxes? Are you paying for an archiving or eDiscovery solution? Exchange Online Plan 2 licenses (often bundled into larger enterprise licenses such as E3 or E5) allow for archive mailboxes of unlimited size. These licenses also offer eDiscovery and compliance options that meet the needs of complex organizations.

The value of the integrated cloud-based security and compliance resources in the Office 365 environment is immense. Many of our customers have abandoned their entire existing MDM solutions in favor of Intune. Data Loss Prevention allows you to protect your company data against exfiltration. Office 365 Advanced Threat Protection fortifies your environment against phishing attacks and offers zero-day attachment reviews. These technologies are just the tip of the iceberg and can either replace or augment an existing malware and hygiene strategy. And all these solutions specifically relate and interoperate with Exchange Online.

CHECK OUT OUR WHITEPAPER & LEARN ABOUT CLOUD-BASED APP MODERNIZATION APPROACHES

Some other technologies that seamlessly work with Exchange Online and offer integral protections to that product as well as other Microsoft cloud and SaaS solutions:

  • Conditional Access – Precise, granular access control to applications
  • Intune – Device and application management and protection
  • Azure AD Identity Protection – Manage risk levels for associate activity
  • Azure Information Protection – Classify and protect documents
  • Identity Governance – Lifecycle management for access to groups, roles, and applications

Think outside the datacenter

There are advantages to thinking outside the (mail)box when considering an Exchange migration strategy to the cloud. Office 365 offers an incredible suite of interoperability tools to meet most workflows. So while we can partner with you on the journey to Office 365, don’t overlook some of the key tools that are also available in the Microsoft arsenal, including OneDrive for Business, SharePoint Online, and Microsoft Teams – all of which could be potential next steps in your SaaS journey! Each tool is a game-changer in its own right, and each will bring incredible collaboration value to your associates.

AIS has helped many customers migrate large and complex on-premises environments to Office 365.

Whether you need to:

  • Quickly migrate Exchange to Exchange Online for End of Support
  • Move File services to OneDrive and SharePoint Online for your Personal drives/Enterprise Shares/Cloud File Services
  • Adopt Microsoft Teams from Slack/HipChat/Cisco Teams
  • Migrate large and complex SharePoint farm environments to SharePoint Online

Whatever it is, we’ve got you covered.

What to do next?

Modern Workplace Assessment for Exchange 2010

Take action right now and start a conversation with AIS today. Our experts will analyze your current state and roll out your organization’s migration to Office 365 quickly and seamlessly.

To accelerate your migration to Office 365, let us provide you with a free Modern Workplace assessment that comprehensively evaluates:

  • Organization readiness for adoption of Office 365 (Exchange and desktop-focused)
  • Desktop-focused insights and opportunities to leverage Microsoft 365 services
  • Total cost of ownership (TCO) for migrating Exchange users to Office 365, including licensing fees
  • Migration plans with detailed insights and approaches for service migrations such as…
    • Exchange to Exchange Online
    • File servers, personal shares, and Enterprise shares to OneDrive for Business and SharePoint Online
    • Slack / HipChat / Cisco Teams to Microsoft Teams
    • SharePoint Server to SharePoint Online

GET AN ASSESSMENT OF YOUR EXCHANGE 2010 ENVIRONMENT

Wrapping Up

Migrating your email to Office 365 is your best and simplest option to help you retire your Exchange 2010 deployment. With a migration to Office 365, you can make a single hop from old technology to state-of-the-art features.

AIS has the experience and expertise to evaluate and migrate your on-premises Exchange and collaboration environments to the cloud. Let us focus on the business of migrating your on-premises applications to Office 365, so you can focus on the business of running your business. This is the beginning of a journey, but one AIS is familiar with and comfortable guiding you through to a seamless and successful cloud migration. If you’re interested in learning more about our free Modern Workplace assessment or getting started with your Exchange migration, reach out to AIS today.

NOT SURE WHERE TO START? REACH OUT TO AIS TO START THE CONVERSATION.

As we think about services that Azure can offer, we often think about apps (e.g., App Services, AKS, and Service Fabric) and data (e.g., Azure Storage, Databricks, and Azure Data Lake). It turns out that you can also leverage Azure purely as a Network-as-a-Service (NaaS) – network is a basic building block for all the app and data services in Azure, but by NaaS, I am explicitly talking about leveraging Azure networking in a *standalone* manner. Allow me to explain what I mean: in the picture below, you will see a representation of the Azure global footprint. It is so vast that it includes 100K+ miles of fiber and subsea cables, and 130 edge locations connecting over 50 regions worldwide. Think of NaaS as a way to tap into Azure’s global infrastructure to improve the network performance and resilience of your applications, regardless of whether the apps are hosted in Azure or not.

Azure's Global Footprint

Let’s discuss two specific Azure services that offer NaaS capabilities. Note that there are other services, like Azure Firewall – think firewall-as-a-service – that can fall into the NaaS category. However, I am limiting my discussion to two services – Azure Front Door and Azure Virtual WAN – because, in my opinion, they most closely align with leveraging Azure network infrastructure and services in a standalone manner.

Azure Front Door Service

Azure Front Door service allows you to define global routing for your applications that optimizes performance and resilience. Front Door is a layer 7 (HTTP/HTTPS) service. Please refer to the diagram below for a high-level view of how Front Door works: you advertise your application’s URL using the anycast protocol, so traffic directed towards your application gets picked up by the “closest” Azure Front Door and routed to your application hosted in Azure or on-premises – for applications hosted outside of Azure, the traffic will traverse the Azure network to the point of exit closest to the location of the app.

The primary benefit of using Azure Front Door is improved network performance, since traffic is routed over the Azure backbone instead of the long-haul public internet. It turns out there are several secondary benefits to highlight as well: you can increase the reliability of your application by having Front Door provide instant failover to a backup location, with smart health probes checking the health of your application. Additionally, Front Door offers SSL termination and certificate management, application security via Web Application Firewall, and URL-based routing.

Routing Azure Front Door

Azure Virtual WAN

Azure Virtual WAN offers branch connectivity to, and through, Azure. In essence, think of Azure regions as hubs that, along with the Azure network backbone, can help you establish branch-to-VNet and branch-to-branch connectivity.

You are probably wondering how Virtual WAN relates to existing cloud connectivity options like point-to-site VPN, site-to-site VPN, and ExpressRoute. Azure Virtual WAN brings these connectivity options together into a single operational interface.

Azure Virtual WAN Branch Connectivity

The following diagram illustrates a client’s virtual network overlay over the Azure backbone. The Azure Virtual WAN virtual hub is located in the West Europe region. The virtual hub is a managed virtual network that, in turn, enables connectivity to VNets in West Europe (VNetA and VNetB) and to an on-premises branch office (testsite1) connected via a site-to-site VPN tunnel over IPsec. An important thing to note is that the site-to-site connection is hooked to the virtual hub and *not* directly to the VNet (as is the case with a virtual network gateway).

Virtual Network Overlay Azure Backbone

Finally, you can work with one of many Azure Virtual WAN partners to automate the site-to-site connection, including setting up the branch device (VPN or SD-WAN – software-defined wide area network) so that connectivity with Azure is configured automatically.

I just returned from Microsoft BUILD 2019, where I presented a session on Azure Kubernetes Service (AKS) and Cosmos DB. Thanks to everyone who attended. We had excellent attendance – the room was full! I like to think that the audience was there for the speaker 😊 but I’m sure the audience interest is a clear reflection of how popular AKS and Cosmos DB are becoming.

For those looking for a 2-minute overview, here it is:

In a nutshell, the focus was to discuss combining a cloud-native service (like AKS) with a managed database (like Cosmos DB).

We started with a discussion of Cloud-Native Apps, along with a quick introduction to AKS and Cosmos. We quickly transitioned into stateful app considerations and talked about new stateful capabilities in Kubernetes including PV, PVC, Stateful Sets, CSI, and Operators. While these capabilities represent significant progress, they don’t match up with external services like Cosmos DB.

One option is to use the Open Service Broker. It allows Kubernetes-hosted services to talk to external services using cloud-native tooling like svcat (Service Catalog).
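
As a rough illustration of that workflow (the class and plan names below are placeholders that vary by broker), the svcat CLI can discover and provision an external service, then bind it so credentials land in a Kubernetes secret:

# Discover the services exposed by the registered broker
svcat get classes

# Provision an instance (class and plan names are illustrative)
svcat provision mydb --class azure-cosmosdb --plan standard

# Bind the instance; the returned credentials are stored in a Kubernetes secret
svcat bind mydb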

External services like Cosmos DB can go beyond cluster SRE and offer, in essence, “turn-key” SRE – specifically, geo-replication, API-based scaling, and even multi-master writes (eliminating the need to fail over).

Since the Open Service Broker is an open specification, your app remains mostly portable even when you move from one cloud provider to another. Open Service Broker does not, however, deal with syntactic differences, such as connection string prefix differences between cloud providers. One way to handle these differences is to use Helm.
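
As a rough sketch of that idea (the chart layout and value names here are made up for illustration), a Helm chart can template the provider-specific connection string prefix so the Kubernetes manifests stay identical across clouds:

# values-azure.yaml – hypothetical per-cloud values file
db:
  connectionStringPrefix: "AccountEndpoint="
  host: "mydb.documents.azure.com"

# templates/configmap.yaml – Helm injects the cloud-specific prefix at install time
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  connectionString: "{{ .Values.db.connectionStringPrefix }}{{ .Values.db.host }}"

Installing with helm install myapp ./chart -f values-azure.yaml (or a corresponding values-aws.yaml) then selects the right syntax per cloud without touching application code.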

Learn more about my BUILD session:

Here you can find the complete recording of the session and slide deck: https://mybuild.techcommunity.microsoft.com/sessions/77138?source=sessions#top-anchor

Additionally, you can find the code for the sample I used here: https://github.com/vlele/build2019 

WORK WITH THE BRIGHTEST LEADERS IN SOFTWARE DEVELOPMENT

As developers, we spend a lot of time developing APIs. Sometimes it’s to expose data that we’ve transformed or to ingest data from other sources. Not coincidentally, more and more companies are jumping into the realm of API Management – Microsoft, Google, MuleSoft, and Kong all have products that provide this functionality. With this much investment from the big players in the tech industry, API management is obviously a priority. Now, why would anyone want to use an API Management tool?

The answer is simple: It allows you to create an API Gateway that you can load all your APIs into, providing a single source to query and curate. API Management makes life as an admin, a developer, and a consumer easier by providing everything for you in one package.

Azure API Management

What does Azure API Management provide? Azure API Management (APIM) is a cloud-based PaaS offering available in both commercial Azure and Azure Government. APIM provides a one-stop-shop for API authority, with the ability to create products, enforce policies, and utilize a robust developer portal.

Not only can API Management integrate seamlessly with your existing Azure infrastructure, but it can also manage APIs that exist on-prem and in other clouds. APIM is also available in both the IL4 and IL5 environments in Azure Government, which allows for extensibility and management for those working in the public sector.

APIM leverages a few key concepts to provide its functionality to you as a developer, including:

  • Products
  • Policies
  • Developer Portal

From providing security to leveraging rate-limiting and abstraction, Azure API Management does it all for API consolidation and governance in Azure. Any API can be ingested, and it gets even easier when APIs follow the OpenAPI Format.

What Are Products?

Products are a layer of abstraction provided inside APIM. Products allow you to create subsets of APIs that are already ingested into the solution—allowing you to overlap the use of APIs while restricting the use of individual collections of APIs. This level of compartmentalization allows you to not only separate your APIs into logical buckets but also enforce rules on these products separately, providing one more layer of control.

Product access is very similar to Azure RBAC, with different groups created inside of the APIM instance. These groups are yet another way for APIM admins to encapsulate and protect their APIs, allowing them to add users already associated with the APIM instance into separate subsets. Users can also be members of multiple groups, so admins can make sure the right people have access to the right APIs stored in their APIM instance.

What Are Policies?

Policies are APIM’s way of enforcing certain restrictions and providing a more granular level of control. There is an entire breadth of policies available in APIM, ranging from rate limiting (for example, disallowing further usage of an API after five calls) to authentication, logging, caching, and transformation of requests or responses from JSON to XML and vice versa. Policies are perhaps the most powerful function of APIM and drive the control that everyone wants and needs. Policies are written in XML and can be easily edited within the APIM XML editor. Policies can also leverage C# 7 syntax, which brings the power of the .NET Framework to your APIM governance.
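
As a minimal sketch of what a policy document looks like (the limit values here are arbitrary), a rate-limit policy in the inbound section might read:

<policies>
    <inbound>
        <base />
        <!-- Allow 5 calls per 60 seconds per subscription (values are illustrative) -->
        <rate-limit calls="5" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>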

What Is the Developer Portal?

The Azure API Management Developer Portal is an improved version of the Swagger documentation that’s generated when you use the OpenAPI spec. The Developer Portal provides an area for developers to readily see APIs, products, and associated applications. The Portal also provides sample request bodies (no more guessing API request structures!) and responses, along with code samples in many different languages.

Finally, the portal also allows you to try API calls with customized request bodies and headers, so you have the ability to see exactly what kind of call you want to make. Along with all that functionality, you can also download your own copy of the OpenAPI Spec for your API after it’s been ingested into your instance.

Why Should I Use APIM?

Every business should be using some form of API Management. You’ll be providing yourself a level of control previously not available. By deploying an API Gateway, that extra layer of abstraction allows for much tighter control of your APIs. Once an API has been ingested, APIM provides many additional functionalities.

First, you can match APIs to products, providing a greater level of compartmentalization. Second, you can add different groups to each product, with groups being subsets of users (e.g., back-end devs, billing devs). Third, you automatically generate a robust developer portal, which provides all of the functionality of the Swagger portal with added features, such as code snippets. Finally, APIM also has complete integration with Application Insights in commercial Azure, providing access to a world-class logging and visualization tool.

Azure API Management brings power to the user, and no API should be left out.

I recently encountered an issue when trying to create an Exact Age column for a contact in Microsoft Dynamics CRM. There were several solutions available on the internet, but none of them was a good match for my specific situation. Some ideas I explored included:

  1. Creating a calculated field using the formula DiffInDays(DOB, Now()) / 365 or DiffInYears(DOB, Now()) – I used this at first, but if the calculated field is a decimal type, you end up with a fractional value like 23.5 years old, which is not desirable. If the calculated field is a whole number type, the value is always rounded. So, if the DOB is 2/1/1972 and the current date is 1/1/2019, the Age will be 47 when the contact is actually still 46 until 2/1/2019.
  2. Using JavaScript to calculate the Age – The problem with this approach is that if the record is not saved, the data becomes stale. It also does not work in a view (i.e., if you want to see a list of client ages). The JavaScript solution is geared toward the form UI experience only.
  3. Using Workflows with Timeouts – This approach seemed a bit complicated and cumbersome to update values daily across so many records.

Determining Exact Age

Instead, I decided to plug some of the age scenarios into Microsoft Excel and simulate Dynamics CRM’s calculations to see if I could come up with any ideas.

Note: 365.25 is used to account for leap years. I originally used 365, but the data was incorrect. After reading about leap years, I decided to plug 365.25 in, and everything lined up.

Excel Formulas

Setting up the formulas above, I was able to calculate the values below. I found that subtracting the DATEDIF Rounded value from the DATEDIF Actual value produced a negative value when the month/day was after the current date (2/16/2019 at the time). This allowed me to introduce a factor of -1 when the Difference was less than or equal to 0.  Using this finding, I set up the solution in CRM.

Excel Calculations
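
For readers who want to sanity-check the math outside of CRM, here is a small TypeScript sketch of the same logic (an illustration only – it is not code that runs inside Dynamics):

// Approximate days per year, accounting for leap years (as in the CRM formulas)
const DAYS_PER_YEAR = 365.25;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function diffInDays(from: Date, to: Date): number {
  return (to.getTime() - from.getTime()) / MS_PER_DAY;
}

// Mirrors Age Actual (decimal), Age Rounded (whole number), and the -1 correction
function exactAge(dob: Date, asOf: Date = new Date()): number {
  const ageActual = diffInDays(dob, asOf) / DAYS_PER_YEAR; // e.g., 46.91
  const ageRounded = Math.round(ageActual);                // whole-number fields round
  const difference = ageActual - ageRounded;               // negative before the birthday
  return difference <= 0 ? ageRounded - 1 : ageRounded;    // the CRM rule described above
}

// Born 2/1/1972, checked on 1/1/2019 – still 46, not the rounded 47
console.log(exactAge(new Date(1972, 1, 1), new Date(2019, 0, 1))); // 46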

The Solution

  1. Create the necessary fields.
    Field          | Data Type      | Field Type | Other                | Formula
    DOB            | Date and Time  | Simple     | Behavior: User Local |
    Age Actual     | Decimal Number | Calculated | Precision: 10        | DiffInDays(new_dob, Now()) / 365.25
    Age Rounded    | Whole Number   | Calculated |                      | DiffInDays(new_dob, Now()) / 365.25
    Age Difference | Decimal Number | Calculated | Precision: 10        | new_ageactual – new_agerounded
    Age            | Whole Number   | Calculated |                      | See below
  2. Create a business rule for DOB, setting it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, the DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  3. Set up the Age calculated field as follows:
    Age Calculated Field

Once these three steps have been completed, your new Age field should be ready to use. I created a view to verify the calculations. I happened to be writing this post very late on the night of 2/16/2019: I wrote the first part before 12:00 a.m., then refreshed the view before taking the screenshot below. I was happy to see the Age Test 3 record flip from 46 to 47 when I refreshed after 12:00 a.m.

Age Solution Results

Determining Exact Age at Some Date in the Future

The requirement that drove my research for this solution was the need to determine the exact age in the future. Our client needed to know the age of a traveler on the date of travel. Depending on the country being visited and the age of the traveler on the date of departure, different forms would need to be sent in order to prevent problems when the traveler arrived at his or her destination. The solution was very similar to the Age example above:

The Solution

  1. Here is an overview of the entity hierarchy:
    Age at Travel Entities
  2. Create the necessary fields.
    Entity       | Field                    | Data Type      | Field Type | Other                | Formula
    Trip         | Start Date               | Date and Time  | Simple     | Behavior: User Local |
    Contact      | DOB                      | Date and Time  | Simple     | Behavior: User Local |
    Trip Contact | Age at Travel Actual     | Decimal Number | Calculated | Precision: 10        | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Rounded    | Whole Number   | Calculated | n/a                  | DiffInDays(contact.dob, new_trip.start) / 365.25
    Trip Contact | Age at Travel Difference | Decimal Number | Calculated | Precision: 10        | new_ageattravelactual – new_ageattravelrounded
    Trip Contact | Age at Travel            | Whole Number   | Calculated | n/a                  | See below
  3. Create a business rule for the Contact DOB field, setting it equal to Birthdate when Birthdate contains data. This way, when Birthdate is set, the DOB is set automatically. This arrangement is necessary for the other calculated fields.
    Business Rules
  4. Set up the Trip Contact’s Age at Travel calculated field as follows:
    Age at Travel Calculated Field

Once these steps have been completed, your new Age at Travel field should be ready to use. I created a view to verify the calculations.

You’ll notice that in the red example, the trip starts on 8/14/2020. The contact was born on 9/29/2003 and is 16 on the date of travel but turns 17 a month or so later. In the green example, the trip is also on 8/14/2020. The contact was born 4/12/2008 and will turn 12 before the date of travel.
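
Plugging the red example into the earlier TypeScript sketch confirms the behavior:

// Born 9/29/2003, traveling 8/14/2020 – still 16 on the date of travel
console.log(exactAge(new Date(2003, 8, 29), new Date(2020, 7, 14))); // 16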

Age at Travel Solution Results

Conclusion

While there are several approaches to the Age issue in Dynamics CRM, this is a great alternative that requires no code and works in real time. I hope you find it useful!

Define cloud apps and infrastructure in your favorite language and deploy to any cloud with Pulumi.

If you search the Internet for Infrastructure-as-Code (IaC), it’s pretty easy to come up with a list of the most popular tools: Chef, Ansible, Puppet, Terraform… and the freshman in the IaC class: Pulumi.

It’s 4 a.m. and the production server has gone down. Can you keep calm?

Sure, how tough can it be? Except that you’ll probably need to recall what you did a year ago to set up your environment, then desperately try to figure out what you’ve installed, implemented, or configured since. Finally, after gathering all your findings, you can attempt to closely replicate the environment.

Wouldn’t it be nice to have something that manages all this configuration for you? No, there aren’t robots coming to take over the DevOps team yet. I’m talking about using Infrastructure-as-Code to automatically and consistently manage infrastructure configuration.

What is Infrastructure as Code (IaC)?

As the name suggests, Infrastructure-as-Code is the concept of managing your operations environment in the same way you manage applications or other code.

Infrastructure-as-Code simply means converting your infrastructure into code, managed by some kind of version control system (e.g., Git) and stored in a repository where you can manage it just like your application code.
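
To make that concrete (a hypothetical layout, not a prescription), the infrastructure definition simply lives in the same version-controlled repository as the application:

my-app/
├── src/           # application code
├── infra/         # infrastructure code – reviewed, versioned, and rolled back like any other code
│   └── index.js   # e.g., a Pulumi program, as shown later in this post
└── README.md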

Pulumi: the new IaC tool

While learning Azure, I tried implementing IaC with Azure Resource Manager templates (ARM templates). For this, I learned PowerShell and wrote several templates using it. As a developer, PowerShell isn’t a language I use on a daily basis, but I use JavaScript abundantly for many of my projects.

Then the internet community whispered about Pulumi.

I’ve tried my hand at Pulumi and the experience has been very enlightening, so I’m sharing some of the more important and interesting findings with you all.

Pulumi is a multi-language and multi-cloud development platform.

Pulumi supports all major clouds — including Amazon Web Services (AWS), Azure and Google Cloud, as well as Kubernetes clusters. It lets you create all aspects of cloud programs using real languages (Pulumi currently supports JavaScript, TypeScript, and Python, with more languages supported in the future) and real code, from infrastructure on up to the application itself. Just write programs and run them, and Pulumi figures out the rest.

Using real languages unlocks tremendous benefits:

  • Familiarity: no need to learn new bespoke DSLs or YAML-based templating languages.
  • Abstraction: build bigger things out of smaller things.
  • Sharing and reuse: we leverage existing language package managers to share and reuse these abstractions, either with the community, within your team, or both.
  • Full control: use the full power of your language, including async, loops, and conditionals.

My favorite things about Pulumi

  1. Multi-Language and real language: Using general-purpose programming languages reduces the learning curve and makes it easier to express your configuration requirements.
  2. Developer friendly and easily configurable: Pulumi bridges the gap between development and operations teams by not treating application code and infrastructure as separate things. Developers can easily list out dependencies in the package.json file, as the snippet below illustrates:
{
    "name": "azure-javascript",     // Name of the Pulumi project
    "main": "index.js",             // Entry point of the Pulumi program
    "dependencies": {               // Dependencies (and versions) to be installed with NPM
        "@pulumi/pulumi": "latest",
        "@pulumi/azure": "latest",
        "azure-storage": "latest",
        "mime": "^2.4.0"
    }
}

A YAML file is created when we initialize the Pulumi stack; it configures the parameters required for the program, such as credentials and location.
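
For example (a minimal sketch – the stack name and settings are illustrative), a generated Pulumi.dev.yaml might look like:

# Pulumi.dev.yaml – configuration for the "dev" stack
config:
  azure:environment: public
  azure:location: EastUS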

  3. Reusable Components: Thanks to having a real language, we can build higher-level abstractions.

Below is an example code snippet using a Pulumi component that creates an Azure resource group for reuse in other programs. You can find the full source code, which provisions an Azure Load Balancer, on GitHub.

"use strict";

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

class ResourceGroup extends pulumi.ComponentResource {
    constructor(resourceGroupName, location, opts) {
        super("az-pulumi-createstorageaccount:ResourceGroup", resourceGroupName, {}, opts);

        console.log(`Resource Group ${resourceGroupName} : location ${location}`);

        // Create an Azure Resource Group, parented to this component
        const resourceGroup = new azure.core.ResourceGroup(
            resourceGroupName,
            { location: location },
            { parent: this }
        );

        // Expose the created resource group's name and location as properties
        this.resourceGroupName = resourceGroup.name;
        this.location = location;

        // For dependency tracking, register output properties for this component
        this.registerOutputs({
            resourceGroupName: this.resourceGroupName,
        });
    }
}

module.exports.ResourceGroup = ResourceGroup;


This class can be instantiated as below:

// Import the class
const resourceGroup = require("./create-resource-group.js");

// Create an Azure Resource Group
// Arguments: resource group name and location
let azureResourceGroup = new resourceGroup.ResourceGroup("rgtest", "EastUS");

  4. Multi-Cloud: Pulumi supports all major clouds — including AWS, Azure, and Google Cloud, as well as Kubernetes clusters. This delivers a consolidated programming model and tools for managing cloud software anywhere. There’s no need to learn three different YAML dialects, and five different CLIs, just to get a simple container-based application stood up in production.

The code below uses a single Pulumi program to provision resources in both AWS and GCP (Google Cloud Platform). The example is in TypeScript and requires the @pulumi/aws and @pulumi/gcp packages from NPM.

import * as aws from "@pulumi/aws";
import * as gcp from "@pulumi/gcp";

// Create an AWS resource (S3 Bucket)
const awsBucket = new aws.s3.Bucket("my-bucket");

// Create a GCP resource (Storage Bucket)
const gcpBucket = new gcp.storage.Bucket("my-bucket");

// Export the names of the buckets
export const bucketNames = [
    awsBucket.bucket,
    gcpBucket.name,
];

Pulumi ensures that resources will be created in both clouds. Let’s take a look at how Pulumi creates the plan for both clouds and deploys the resources to the respective clouds.

Previewing update (multicloud-ts-buckets-dev):

Type Name Plan
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev create
+ ├─ gcp:storage:Bucket my-bucket create
+ └─ aws:s3:Bucket my-bucket create

Resources:
3 changes
+ 3 to create

Do you want to perform this update? yes
Updating (multicloud-ts-buckets-dev):

Type Name Status
+ pulumi:pulumi:Stack multicloud-ts-buckets-multicloud-ts-buckets-dev created
+ ├─ gcp:storage:Bucket my-bucket created
+ └─ aws:s3:Bucket my-bucket created

Outputs:
bucketNames: [
[0]: "my-bucket-c819937"
[1]: "my-bucket-f722eb9"
]

Resources:
3 changes
+ 3 created

Duration: 21.713128552s

The outputs show the name of the AWS and GCP buckets respectively.

Another scenario would be to create a storage account and S3 object in Azure and AWS respectively using Pulumi.

// Creating a storage account in Azure
// (storageAccountName, rgName, and rgLocation are defined elsewhere in the program)

const pulumi = require("@pulumi/pulumi");
const azure = require("@pulumi/azure");

const storageAccount = new azure.storage.Account(storageAccountName, {
    resourceGroupName: rgName,
    location: rgLocation,
    accountTier: "Standard",
    accountReplicationType: "LRS",
});

// Creating an S3 bucket in AWS

const aws = require("@pulumi/aws");

const siteBucket = new aws.s3.Bucket("my-bucket", {
    website: {
        indexDocument: "index.html",
    },
});

Pulumi enables you to mix and match these cloud resources within the same program or across different programs and files.

  5. Stacks: A core concept in Pulumi is the idea of a “stack.” A stack is an isolated instance of your cloud program whose resources and configuration are distinct from all other stacks. You might have a stack each for production, staging, and testing, or perhaps for each single-tenanted environment. Pulumi’s CLI makes it trivial to spin up and tear down lots of stacks, as shown below.
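
As a quick illustration (the stack names are arbitrary), the CLI workflow for managing stacks looks like this:

# Create a stack per environment
pulumi stack init staging
pulumi stack init production

# Switch between stacks and deploy the selected one
pulumi stack select staging
pulumi up

# Tear down a stack's resources when no longer needed
pulumi destroy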

Closing Thoughts

I would like to close this post with a statement making the rounds in the community: this is a cloud renaissance for DevOps and developers. Building powerful cloud software will be more enjoyable, more productive, and more collaborative for developers. Of course, everything comes at a cost: after exploring, I found that Pulumi still lacks some documentation. Also, for developers to write IaC, a deep understanding of infrastructure is a must.

I hope that this post has given you a better idea of the overall platform, approach, and unique strengths.

Happy Puluming 🙂

As organizations increase their footprint in the cloud, there’s increased scrutiny of mounting cloud consumption costs, reigniting a discussion about longer-term costs.

This is not an entirely unexpected development. Here’s why:

  1. Cost savings were not meant to be the primary motivation for moving to the cloud – At least not in the manner most organizations are moving to the cloud, which is to move their existing applications with little to no changes. For most organizations, the primary motivation is “speed to value,” aka the ability to deliver business value faster by becoming more efficient in provisioning, automation, monitoring, the resilience of IT assets, etc.
  2. Often the cost comparisons between cloud and on-premises are not a true apples-to-apples comparison – For example, were all on-premises support staff salaries, depreciation, data center cost per square foot, rack space, power and networking costs considered? What about troubleshooting and cost of securing these assets?
  3. As these organizations achieve higher cloud operations maturity, they can realize increased cloud cost efficiency – For instance, by implementing effective auto-scaling, optimizing execution contexts by moving to dynamic consumption plans like serverless, and taking advantage of discounts through longer-term contracts.

Claim Your Free Whitepaper

In this whitepaper, we talk about the aforementioned considerations, as well as cost optimization techniques (including resource-based, usage-based, and pricing-based cost optimization).

FREE WHITEPAPER ON AZURE COST MANAGEMENT: BACKGROUND, TOOLS, AND APPROACHES

About the Podcast

During KubeCon 2018, I had the pleasure to once again be a guest on the .NET Rocks! podcast. I talked to Carl and Richard about what it means to be cloud-native, the ongoing evolution, and what that all means for 2019. We talked in depth about how the cloud-native approach impacts how we build applications on the cloud. We also talked about how the Cloud Native Computing Foundation (CNCF) is fostering an ecosystem of projects like Kubernetes, Envoy, and Prometheus. Finally, we talked about cloud-native computing in the context of Microsoft Azure.

Listen to the full podcast here!

Related Content

If you’re curious about what it means to be cloud-native, you may also enjoy our previous blog post, What Are Cloud-Native Technologies & How Are They Different From Traditional PaaS Offerings. In this post, we discussed the key benefits of cloud-native architecture, compared it to a traditional PaaS offering, and laid out a few use cases.