In January, AIS’ Steve Suing posted a great article titled “What to Know When Purchasing a COTS Product and Moving to the Cloud.” In his post, Steve discusses some things to consider such as security, automation, and integration. Springboarding off of those topics for this blog, I am going to discuss another major consideration: legacy data.

Many COTS implementations are done to modernize a business function that is currently performed in a legacy application. There are some obvious considerations here. For example, when modernizing a legacy app, you might ask:

  • How much of the business need is met purely by out-of-the-box functionality?
  • How much effort will be necessary to meet the rest of your requirements, whether through configuration and customization of the product or through changing your business processes?

However, legacy data is a critical consideration that should not be overlooked during this process.

Dealing with Data When Retiring a Legacy System

If one of the motivators for moving to a new system is to realize cost savings by shutting down legacy systems, the legacy data likely needs to be migrated. If there is a considerable amount of data and a long legacy history, the effort involved in bringing that data into the new product should be carefully weighed. This might seem trivial, since most products at least proclaim to have a multitude of ways to connect to other systems, but recent experience has shown me that the cost of not properly verifying the level of effort can far exceed the savings provided by a COTS product. These types of questions can help avoid a nightmare scenario:

  1. How will the legacy data be brought into the product?
  2. How much transformation of data is necessary, and can it be done with tools that already exist?
  3. Does the product team have significant experience moving legacy systems similar to yours (in complexity, amount of data, industry space, etc.) to the product?
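Question 2 is often answerable before the product team even arrives: a quick profiling pass over an export of the legacy data shows how uniform (or not) the values really are. Below is a minimal sketch in Python; the `created_on` field, the candidate formats, and the sample rows are all hypothetical stand-ins for your own export.

```python
from collections import Counter
from datetime import datetime

def profile_dates(rows, field="created_on"):
    """Tally how many values parse under each candidate date format.

    Anything that matches no format is counted as 'unparseable' -- those
    are the messy cases that will need hand-written transformation rules.
    """
    formats = ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%y")  # illustrative guesses
    tally = Counter()
    for row in rows:
        value = (row.get(field) or "").strip()
        if not value:
            tally["missing"] += 1
            continue
        for fmt in formats:
            try:
                datetime.strptime(value, fmt)
                tally[fmt] += 1
                break
            except ValueError:
                pass
        else:
            tally["unparseable"] += 1
    return tally

# Hypothetical sample of a legacy export.
rows = [{"created_on": "2001-07-04"},
        {"created_on": "7/4/2001"},
        {"created_on": "Jul 4 01"},   # free-text entry from the old UI
        {"created_on": ""}]
print(profile_dates(rows))
```

Even a crude tally like this turns "how much transformation is necessary?" from a guess into a number you can put in front of the product team.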

Answering the first two questions usually starts by sitting down with the product team and having them outline the common processes used to bring legacy data into the product. During these conversations, be sure both sides have technical folks in the room. This will ensure the conversations have the necessary depth to uncover how the process works.

Once your team has a good understanding of the migration methods recommended by the product team, start exploring some actual use cases related to your data. After walking through the common scenarios to get a handle on the process, jump quickly into some of the toughest use cases. Sure, the product might be able to bring in 80% of the data with a simple extract, transform, load (ETL) process, but what about the other 20%? Legacy systems were often created long ago, and functionality has sometimes been squeezed into them in ways that make the data messy. Also consider how the business processes have changed over time, and how the migration will address legacy data created as rules changed over the history of the system. Don’t be afraid to look at that messy data and ask the tough questions about it. This is key.
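That other 20% usually cannot be handled by a plain field mapping; it needs rules that encode the system's history. As a hedged illustration (the codes, cutover date, and target values below are invented, not taken from any real product), a transform for a status code whose meaning changed partway through the legacy system's life might look like:

```python
from datetime import date

# Invented scenario: before 2005 the legacy system used code "C" for
# "cancelled"; after a business-rule change it meant "closed". A plain
# lookup table cannot express that, so the transform consults the
# record's date as well as its code.
CUTOVER = date(2005, 1, 1)

def map_status(legacy_code, record_date):
    if legacy_code == "C":
        return "cancelled" if record_date < CUTOVER else "closed"
    simple = {"O": "open", "P": "pending"}
    # Park anything unrecognized for human review instead of guessing.
    return simple.get(legacy_code, "needs-review")

print(map_status("C", date(2003, 6, 1)))  # cancelled
print(map_status("C", date(2010, 6, 1)))  # closed
```

Routing unknown codes to a "needs-review" bucket, rather than forcing a mapping, keeps the messy residue visible instead of silently corrupting the migrated data.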

Your Stakeholders During Legacy Data Migration

To reduce confirmation bias (the tendency to search for, interpret, and focus on information confirming that this product is your silver bullet), be sure those involved are not too invested in the product; that way they will ask the hard questions. Bring in a good mix of technical and business thought leaders. For instance, challenge the DBAs and SMEs to work together to come up with things that will simply not work. Be sure to set up an environment in which people are rewarded for raising blockers rather than demotivated by being seen as difficult. Remember, every blocker you come up with in this early phase could be the key to making a better decision, and will likely have a major downstream impact.

The product of these sessions should be a list of tough use cases. Now it’s time to bring in the product team again. Throw the use cases up on a whiteboard and see how the product team works through them. On your side of the table, be sure to include the people who will be responsible for the effort of bringing the new system to life. With skin in the game, these people are much less likely to allow problems to be glossed over, and they will drive a truly realistic conversation. Because this involves the tough questions, the exercise will likely take multiple sessions. Resist any pressure to get this done quickly, keeping in mind that a poor decision now can have ripple effects that will last for years.

After some of these sessions, both sides should have a good understanding of the legacy system, the target product, and some of the complexities of meshing the two. With that in place, ask the product team for examples of similar legacy systems they have tackled. This question should be asked up front, but it really cannot be answered intelligently until at least some of the edge cases of the legacy system are well understood. While the product team might not be able to share details due to non-disclosure agreements, they should be able to speak in specific enough terms to demonstrate experience. Including those involved in the edge-case discovery sessions is a must; with the familiarity gained during those sessions, they are the people most likely to make a good call on whether the experience presented by the product team relates closely enough to your needs.

Using Lessons Learned to Inform Future Data Migration Projects

Have I seen this process work? Honestly, the answer is no. My hope is that the information I have presented, based on lessons learned from past projects, helps others avoid some of the pitfalls I have faced and the ripple effects that can lead to huge cost overruns. It’s easy to lose sight of the importance of asking hard questions up front, and of continuing to do so, in light of the pressure to get started. If you feel you are giving in to such pressure, just reach out to future you and ask them for some input.

And a final thought: for complex business processes, COTS products don’t always align perfectly with your legacy system. There are two ways to find alignment. Customizing the system is the obvious route, although it often makes more sense to change business processes to align with a product you are investing in than to force a product to do what it was not designed for. If neither is possible, perhaps you should be looking at another product. Keep in mind that if no product fits your business processes closely enough, it might make more financial sense to build your own system from the ground up.

Lift-and-Shift Approach to Cloud Transformation

What does it mean to Rehost an application?

Rehosting is an approach to migrating business applications hosted in on-premises data center environments to the cloud by moving the application “as-is,” with little to no changes to the business functions performed by the application. It’s a faster, less resource-intensive migration approach that gets your apps into the cloud without much code modification. It is often a good first step to cloud transformation.

Organizations with applications that were initially developed for an on-premises environment commonly look to rehosting to take advantage of cloud computing benefits. These benefits may include increased availability and networking speeds, reduced technical debt, and a pay-per-usage cost structure. When defining your cloud migration strategy, it’s essential to analyze all migration approaches, such as re-platforming, refactoring, replacing, and retiring.


In this blog post, I discuss an app modernization approach that we call “modernize-by-shifting.” In essence, we take an existing application and move it to “managed” container hosting environments like Azure Kubernetes Service or Azure Service Fabric Mesh. The primary goal of this app modernization strategy is to make the smallest possible change to the existing application codebase. This approach is markedly different from a “lift-and-shift” approach, in which workloads are migrated to cloud IaaS unchanged, with little to no use of cloud-native capabilities.

Step One of App Modernization by Shifting

As the first step of this approach, an existing application is broken into a set of container images that include everything needed to run a portion of the application: code, runtime, system tools, system libraries, and settings. Approaches to breaking up the application into smaller parts can vary based on the original architecture. For example, if we begin with a multi-tier application, each tier (e.g., presentation, application, business, data access) could map to a container image. While this approach will admittedly lead to coarser-grained images compared to a purist microservices-based approach of lightweight images, it should be seen as the first step in modernizing the application.


Inflexible customer solutions and business unit silos are the bane of any organization’s existence. So how does a large, multi-billion dollar insurance organization, with numerous lines of business, create a customer-centric business model while implementing configurable, agile systems for faster business transactions?

The solution is not so simple, but with our assistance, we’ve managed to point our large insurance client in the right direction. What began as a plan to develop a 360-degree customer profile and connect the disparate information silos between business units ultimately became the first step towards a more customer-centric organization.

A major multi-year initiative to modernize the organization’s mainframe systems onto the Microsoft technology platform will now provide significant cost savings over current systems and enable years of future business innovation.

Welcome to the first article in a series on moving enterprise systems from a mainframe-based platform to something else. That “something else” could be any number of things, but our default assumption (unless I say otherwise) is going to be a transaction processing system based on a platform like Microsoft’s .NET Framework. While I expect that there will eventually be some technical content, the initial focus is going to be on planning, methodology, defining solutions, and project management. Moving a large system off of a legacy mainframe platform is very different from a development project that starts with a blank slate. Chances are that you’re going to be shifting paradigms for quite a few people in both your technical and business organizations — and for yourself, as well.