Overnight, the world’s workforce has moved into the home office. As a result, online meetings are now the only way we meet. For many organizations, this sudden change has dramatically impacted how their business operates. Staff members who were accustomed to collaborating in person now feel disconnected. Salespeople who relied on face-to-face interaction to close deals are suddenly isolated. The visual cues, facial expressions, and non-verbal communication we took for granted are gone. Right now, across the globe, millions of professionals are facing these new communication anxieties.

Microsoft Teams has many features that will help overcome these challenges. Team chats, channels, and document collaboration can empower your online meetings. These ten tips will help you ensure that all participants are comfortable with the technology so they can focus on the important part of the meeting – the people and content.

#1 – Set Expectations in the Meeting Invite

The meeting invite is a great opportunity to let people know what to expect by stating the purpose and agenda items. This will make the meeting meaningful and keep everyone on track. Will documents or other visuals be shared? Does your organization endorse the use of web cameras? Will notes or a recording be available for people to refer to after the meeting? Keep the message short, but if there is anything you need to point out to ensure people can participate fully, the invite is a good place to do that.

If you’re expecting newcomers, you might include a little extra guidance about accessing the meeting or where questions can be addressed. Although most people will not need this, including it sets a welcoming tone. For example:

New to using Microsoft Teams?

For the best experience:

  1. Click Join Microsoft Teams Meeting
  2. You may be prompted to choose how you want to connect – browser or desktop app
  3. Select audio option – computer or phone

If you need to join with only audio, you can use the phone number provided, or select another number through Local numbers.

If you’re dialing in, you can press *1 during the meeting to hear menu options.

If you need any help, click on Learn more about Teams or contact me at first.last@email.com

#2 – Review Meeting Controls with Newcomers

If you plan to have people join who are completely new to Teams, take a minute or two to review the meeting controls so people can participate comfortably. If you’re going to invite people to turn on their webcams, this is a great opportunity to illustrate the Start with a Blurred background option (see Tip #5).

Microsoft Teams App Bar Explained

  1. Meeting duration
  2. Webcam on or off
  3. Mute or unmute yourself
  4. Screen sharing
  5. More options
  6. Conversations
  7. Participant list
  8. Leave meeting

#3 – Mute is Everyone’s Friend

In meetings with more than five people, anyone who joins after the meeting has started will join muted to reduce noise. If you notice disruptive noise from others, you can easily mute that person, or mute everyone, from the participant list.

Mute People in Microsoft Teams or Mute All

If someone has been muted, they’ll get a notification letting them know, and they can unmute themselves using their meeting controls when they need to speak. For those joining by phone, pressing *6 mutes and unmutes.

#4 – Joining from Multiple Devices? Avoid Echoing.

Sometimes, people will join the meeting with their computer and then dial in with their phone for audio. To avoid an echo, just make sure your computer speaker is muted; there is an option to do this on the join screen before entering the meeting. If you forget, simply turn off your computer sound once you’re in.

Mute or Blur Your Background in Microsoft Teams App

#5 – “Mute” Any Distracting Backgrounds

If you need to share your webcam but the background could be distracting, you can take advantage of the Start video with blur option under More Options. This blurs the background behind you for the duration of the camera share.

#6 – Pick What to Share

Don’t want everyone to read your email when you share your desktop? You have multiple choices to share:

  • Your entire desktop or screen
  • A window or application
  • PowerPoint
  • Whiteboard

With application sharing, participants will only see the application or window that you choose to share. They will not see any other application or notifications you might receive.

#7 – Let a Coworker Control Your Screen

A coworker can request control when you are sharing your desktop so that he or she can control the screen and cursor. If you choose to share an application, like PowerPoint, rather than your desktop, control would be limited to the shared application. For security reasons, external participants cannot request control when you are sharing your desktop.

#8 – Take Notes in the Meeting

With Microsoft Teams, taking and sharing meeting notes is easy. Notes can be accessed from More Options and are available before, during, and after the meeting.

Taking Notes in a Microsoft Teams Meeting

#9 – Two Ways to Collaborate on Documents

You can work on files together through a screen share, where one person types and the others talk. Or, you can upload the document to the meeting chat and allow multiple people to work on the document in real-time.

#10 – Take Advantage of the Resources Available

The Microsoft Blog offers a number of good articles on remote work and Teams meetings that are worth exploring.

Thanks to authors Tacy Holliday, Chris Miller, and Guy Schmidt for their contributions to this blog.

As an IT leader, you understand that a successful cloud transformation positions IT as a business enabler rather than a curator of infrastructure. Adopting the cloud is more than simply moving your on-prem instance to a provider’s servers and going on with business as usual. The flexibility, scalability, and security of the cloud allow businesses to deliver value in ways that, in earlier generations, were not dreamed of outside of science fiction novels. Cloud transformation is about the whole system – people, processes, data, and tools. When cloud transformation is done right, it’s a true game changer. Getting it right requires you to focus on your people, not just the technology enabling them. Here are some tips to help you do that.

Connect the Dots

Whether you’re focused on adopting the cloud, modernizing your systems, or getting more from your data, helping your business solve problems and overcome challenges is the driving force. People will need to work differently to achieve your desired results. If you’ve ever tried to change your own habits – working out, reading more, going to bed earlier – you know that influencing human behavior isn’t easy. To help people navigate these changes and thrive, it’s especially important to connect these dots:

  • How the solution will help employees solve problems and overcome challenges they face in their day to day work.
  • What people will need to do differently and what support will be available to help them do that.

Start with the Home Team, But Don’t End There

Start with your own IT teams. Whether the solution includes provisioning firewalls to migrate an on-prem intranet to SharePoint Online, modernizing millions of lines of COBOL code and migrating subsystems into Microsoft Azure, harnessing cloud-native services and DevOps practices, unleashing data intelligence through cloud-based outage tracking systems that incorporate Power BI, or automating workflows with Power Apps, people from different IT teams will need to work together to get the right solutions in place. This means communication and collaboration across IT teams, as well as within them, is more important than ever.

Ensuring that all your business’ IT teams understand how they are an important part of the solution and ensuring they have access to the support they need to perform successfully are critical tasks. However, teams outside of IT are also likely to be impacted, whether it’s HR needing to update policies or documentation as a result of the new tools, or the entire company’s workforce using new communication and collaboration tools.

Ensure You Have a Complete Solution

Take a closer look at whose work will be impacted, what the areas of impact are, and the likely degree of impact. This will help you manage risk by ensuring you have a complete solution and that you can wisely deploy resources. If you have accelerated the deployment of cloud-based collaboration tools in the wake of the COVID-19 pandemic and are proceeding with immediate implementation of tooling, you can use this guidance to determine the gaps in a complete solution and what’s needed to close the loop. Here are three questions to help you identify the impact that your solution needs to address:

Step 1 – Whose day to day work is impacted?

  • IT Teams
  • Employees
  • Business Units
  • Customers
  • Other Stakeholders

Step 2 – What are the areas impacted?

  • Roles
  • Processes
  • Tools
  • Actions/Work behaviors
  • Mindsets/views

Step 3 – What is the level of impact on the day to day work?

  • Low – Small change in one or two areas
  • Medium – Medium change in one area or multiple areas impacted
  • High – Significant change in one or more areas or small change but significant consequences if the change is not adopted well

The greater the level of impact, the more important it is to have enough support in place. How much support is enough? To answer that, take a closer look at the likely obstacles, then put support in place to clear the path.

Anticipate Obstacles and Proactively Clear the Path

With the impact clear, it’s time to anticipate the obstacles people will face as they adopt the new roles, processes, tools, actions/work behaviors, or mindsets/views they need to achieve successful results.

For example, let’s say your company is migrating to a central repository and communication platform. Employees will benefit from a more seamless work experience across devices and be able to access on-demand resources, get answers to their questions, and resolve issues faster. Employees will need to know how to find the information they need in a timely manner, and they will need to know whether to use e-mail, instant messaging, or post to a discussion channel for their specific business scenarios.

Most obstacles fall into one of four categories:

  • Knowing: Do those impacted know what is changing and why they are an important part of the solution?
  • Caring: Do they care about the problems the new tool, system or processes will help solve?
  • Norming: Do they know what is expected of them? Does their leadership (and other influencers) demonstrate through consistent words and actions that this is important?
  • Performing: Can they do what is expected of them? How will they get feedback? Are incentives aligned with the desired performance? Are there any new challenges that they are likely to face and have these been accounted for?

A complete solution anticipates these challenges and proactively builds in support by considering the experiences people have and the support they need specific to the business scenarios they are engaged in on a regular basis.

Conclusion

Wherever you are on your cloud transformation journey, make sure you are considering the experiences and support people need to successfully navigate changes in roles, processes, and tooling. The sooner people start to thrive, the sooner your company realizes its ROI, with business problems solved and challenges overcome. Ultimately, a complete cloud transformation solution must be tech-fueled but people-focused.

When you think about your organization’s cloud strategy, mindset may not be one of the first things that comes to mind, but it is crucial. Adopting a cloud mindset has been called the “single most important” predictor of cloud success (Lewis, 2017, p. 44). Why? Because it is key to aligning the people, processes, technology, and culture necessary for cloud transformation to take place at enterprise scale rather than as a one-off project with limited scope.

What is a Cloud Mindset?

Mindset refers to your set of attitudes or ways of thinking. Carol Dweck (2006), Stanford professor and popular author, defines mindset as “the view you adopt for yourself” (p.6). Although a given mindset isn’t right or wrong, some mindsets are significantly more advantageous than others in a specific context.

Take for example a top-notch engineer who has developed a reputation for personally solving difficult problems. The engineer’s stellar individual contributor mindset has enabled him to make a difference for the business and achieve success. When that engineer is promoted to manager, it may be tempting for him to stay in the same mindset and to personally solve the problems his team now encounters. That mindset won’t allow him or his team to be successful in the long run. He will need to make the switch to a manager mindset, where he is focused on helping the team develop their own capacity to solve problems.

A cloud mindset has two key components:

  1. Willingness to rethink the role of technology and how it can be leveraged across the enterprise for strategic advantage and mission fulfillment.
  2. Willingness to rethink the value proposition across the organization, considering where alignments in people, processes, and culture are needed to deliver value more effectively and efficiently.

Rethinking Technology

Rethinking technology means shifting the view of technology from static resource to dynamic service – like the difference between a noun and a verb, or between a concrete example and an abstract formula. One example of this trend is de-emphasizing rigid architecture in favor of infrastructure-as-code (Chung & Rankin, 2017). Cloud is an enterprise capability/system delivering compute power where and when needed to help people and the business accomplish work, not an add-on service or outsourced data center (Wyckoff & Pilat, 2017). Cloud supercharges the speed and agility of the business, allocating and reallocating resources nearly instantaneously. Through the cloud’s scalability, there is a tremendous opportunity to move to continuous improvement/continuous delivery and to try new ways of working that deliver better value to end-users – customers and employees.

Rethinking Work

Just as rigid architecture can be rethought as code, a cloud mindset enables work to be reconceptualized as data transformation and stewardship. Examples of this may include:

  • creating a document
  • monitoring a network
  • setting permission levels
  • configuring a tenant
  • collecting credit card information to securely process a transaction
  • sending an email
  • constructing a building from blueprints
  • having a conversation with a coworker

Get the right data to the right place at the right time, with the right interface, so it can be used by the worker – whether human or bot. Thinking in this way can help the business identify strengths, opportunities, and blockers that can be addressed, making work more productive, cost-effective, and potentially more meaningful.

Rethinking Value Delivery

Thinking of work as data transformation and stewardship opens new ways of considering how the business delivers and can deliver value. Delivering value is dependent on the ability to move data across the organization, and technology serves to increase flow or throughput.

A cloud mindset views customers and employees as important partners, seeking to understand their experiences and striving to make their experiences better by delivering the right data to the right people at the right time in a user-friendly way. Understanding what employees and customers perceive as valuable can help business leaders make the most informed decisions.

As decisions are made, there will be tradeoffs. For example, a company that moves payroll and talent management to a software-as-a-service (SaaS) offering will gain organizational agility but will have to give up a certain degree of customization based on the limitations of the SaaS. Because experiences are valued, the business will ensure support is in place to navigate the tradeoffs and changes.

Rethinking Silos

Silos in organizations have gotten a bad reputation, but they serve a purpose: they enable a clear definition of workstreams, roles, and responsibilities and promote work being done by subject matter experts. The key is to ensure the boundary is set up so that needed data can flow into and out of the silo for productive work. A cloud mindset treats silo boundaries as interfaces and intentionally designs them so that data that needs to move across the interface can be shared securely. The goal is to make the interface more user-friendly so that the silo does not unnecessarily slow down the movement of data needed to deliver business value.

The proliferation of cross-functional teams is one way that businesses are trying to address this, although there are limitations. Cross-functional teams can help share data across functional silos, but often processes are created within a silo. This is where the view of silo boundaries as interfaces can be especially helpful. Mapping the steps, inputs, and outputs in a process or series of processes that span functional units is a good tool to identify where interface improvements are needed to improve data throughput. Service blueprints are another option. Service blueprints visualize different components of a service (e.g., people, resources, and physical/digital artifacts) that are tied to touchpoints on a customer’s or employee’s journey.

Rethinking Culture

Organizational culture is like the operating system of the organization: it refers to the collective values, beliefs, attitudes, and behaviors that are active in the organization. Staying with the operating system metaphor, cloud transformation has trouble running in certain environments. As with mindset, this does not mean that certain organizational cultures are better than others, but in the context of cloud transformation, culture can promote or hinder the effort:

  • Where culture is aligned to a cloud mindset, then cloud transformation accelerates.
  • Where culture is not aligned to a cloud mindset, then there is friction.

Often, the effect is mixed, with some elements of culture aligned and others not (e.g., Feghali, 2019). It is important to capitalize on the strengths of the current culture while using effective change management to overcome friction that, left unaddressed, can stunt cloud transformation. The goal of cloud transformation is to help your organization be its best, leveraging the cloud to do so. The goal is not to turn your organization into a copy of another organization or another organization’s culture. With that said, helping the organization be receptive to and successful in cloud transformation requires addressing culture.

Configuring Your Mindset

Our experience working with federal and commercial clients, along with the research on successful cloud adoption, points to the following settings as optimal for configuring a cloud mindset:

  • On switch – Start with the expectation to learn, grow, and iterate
  • Off switch – Wait to figure out all the details before starting
  • On switch – View cloud as an enterprise capability/system delivering compute power where and when needed to help people and the business accomplish work
  • Off switch – View cloud as an add-on service or outsourced data center
  • On switch – Understand that work is ultimately data stewardship/transformation
  • On switch – Focus on getting the right data to the right place at the right time, with the right interface, so it can be used by the worker – whether human or bot
  • Off switch – Believe that new tools and a little training are all people need to make the transition
  • On switch – Know that customers and employees are important partners; value their experiences and strive to make them better
  • Off switch – Think that cloud technology should not impact the organization’s culture
  • On switch – Think that helping culture better align with delivering business value is a key part of cloud transformation

Cloud transformation reaches all areas of the business. This includes upgrading and syncing legacy systems as well as aligning organizational structure, processes, people, culture, and leadership to unleash the benefits of the cloud at an enterprise scale. Although this is not as straightforward as configuring a tenant, it is worth it. Successful cloud transformation starts with adopting a cloud mindset and then helping the other pieces align.

Want to learn more about managing changes associated with cloud transformation? Stay tuned for my next post on the people side of transformation.

References:

  • Chung, J., & Rankin (2017). How to manage organizational change and cultural impact during cloud transformation [SlideShare presentation].
  • Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.
  • Feghali, R. (2019). The “microservic-ing” of culture. CHIPS Magazine.
  • Lewis (2017). Cloud success is about changing your mindset. NZ Business + Management, 31(6), 44–45.
  • Wyckoff, A., & Pilat, D. (2017). Key issues for digital transformation in the G20. https://www.oecd.org/internet/key-issues-for-digital-transformation-in-the-g20.pdf

In January, AIS’ Steve Suing posted a great article titled “What to Know When Purchasing a COTS Product and Moving to the Cloud.” In his post, Steve discusses some things to consider, such as security, automation, and integration. Springboarding off of those topics, in this blog I am going to discuss another major consideration: legacy data.

Many COTS implementations are done to modernize a business function that is currently performed in a legacy application. There are some obvious considerations here. For example, when modernizing a legacy app, you might ask:

  • How much of the business need is met purely by out-of-the-box functionality?
  • How much effort will be necessary to meet the rest of your requirements in the form of configuration and customization of the product or through changing your business process?

However, legacy data is a critical consideration that should not be overlooked during this process.

Dealing with Data When Retiring a Legacy System

If one of the motivators for moving to a new system is to realize cost savings by shutting down legacy systems, the legacy data likely needs to be migrated. If there is a considerable amount of data and a lengthy legacy history, the effort involved in bringing that data into the new product should be carefully considered. This might seem trivial, with most products at least proclaiming to have a multitude of ways to connect to other systems, but recent experience has shown me that the cost of not properly verifying the level of effort can far exceed the savings provided by a COTS product. These types of questions can help avoid a nightmare scenario:

  1. How will the legacy data be brought into the product?
  2. How much transformation of data is necessary, and can it be done with tools that already exist?
  3. Does the product team have significant experience moving similar legacy systems (complexity, amount of data, industry space, etc.) to the product?

The first two questions are usually answered by sitting down with the product team and having them outline the common processes used to bring legacy data into the product. During these conversations, be sure both sides have technical folks in the room. This will ensure the conversations have the necessary depth to uncover how the process works.

Once your team has a good understanding of the migration methods recommended by the product team, start exploring some actual use cases related to your data. After exploring some of the common scenarios to get a handle on the process, jump quickly into some of the toughest use cases. Sure, the product might be able to bring in 80% of the data with a simple extract, transform, load (ETL), but what about that other 20%? Often legacy systems were created in a time long, long ago, and sometimes functionality will have been squished into them in ways that make the data messy. Also, consider how the business processes have changed over time and how the migration will address legacy data created as rules changed over the history of the system. Don’t be afraid to look at that messy data and ask the tough questions about it. This is key.

Your Stakeholders During Legacy Data Migration

To reduce confirmation bias – the tendency to search for, interpret, and/or focus on information that confirms this is your silver bullet – be sure those involved are not too invested in the product. That way they will ask the hard questions. Bring in a good mix of technical and business thought leaders. For instance, challenge the DBAs and SMEs to work together to come up with things that simply will not work. Be sure to set up an environment in which people are rewarded for coming up with blockers and not demotivated by being seen as difficult. Remember, every blocker you come up with in this early phase could be the key to making better decisions and will likely have a major downstream impact.

The product of these sessions should be a list of tough use cases. Now it’s time to bring in the product team again. Throw the use cases up on a whiteboard and see how the product team works through them. On your side of the table, be sure to include the people who will be responsible for bringing the new system to life. With skin in the game, these people are much less likely to allow problems to be glossed over, and they will drive a truly realistic conversation. Because this involves the tough questions, the exercise will likely take multiple sessions. Resist any pressure to get it done quickly, keeping in mind that a poor decision now can have ripple effects that will last for years.

After some of these sessions, both sides should have a good understanding of the legacy system, the target product, and some of the complexities of meshing the two. With that in place, ask the product team for examples of similar legacy systems they have tackled. This question should be asked up front, but it really cannot be answered intelligently until at least some of the edge cases of the legacy system are well understood. While the product team might not be able to share details due to non-disclosure agreements, they should be able to speak in specific enough terms to demonstrate experience. Including those involved in the edge case discovery sessions is a must. With the familiarity gained during those sessions, those are the people that are going to be most likely to make a good call on whether the experience being presented by the product team relates closely enough to your needs.

Using Lessons Learned to Inform Future Data Migration Projects

Have I seen this process work? Honestly, the answer is no. My hope is that the information I have presented, based on lessons learned from past projects, helps others avoid some of the pitfalls I have faced and the ripple effects that can lead to huge cost overruns. It’s easy to lose sight of the importance of asking hard questions up front, and of continuing to do so, in light of the pressure to get started. If you feel you are giving in to such pressure, just reach out to future you and ask them for some input.

And a final thought: for complex business processes, COTS products don’t always align perfectly with your legacy system. There are two ways to find alignment. Customizing the system is the obvious route, although it often makes more sense to change business processes to align with a product you are investing in rather than trying to force the product to do what it was not designed for. If you find that this is not possible, perhaps you should be looking at another product. Keep in mind that if no product fits your business processes closely enough, it might make more financial sense to consider building your own system from the ground up.

Power Platform is Microsoft’s newest pillar in its cloud platform stack, steadily growing in popularity as more and more organizations realize the capability of the tool. Put into the hands of experienced developers, the Platform can expedite the development of highly complex organizational applications. Provided to citizen developers, the Platform connects them to organizational data and allows them to automate personal and team workloads. At an organizational level, IT has insight into who is doing what where and for whom – and the ability to secure, manage, and govern it. It is a first-class platform.

So, why has it taken so long to be recognized as such? Part of the problem is a misunderstanding around what “low code” means.

When you hear the phrase “low code,” you shouldn’t be thinking “low thought” or “low quality.” Rather, think of low code as short-hand coding. The team at Microsoft has done a large chunk of the pre-work by setting up your Azure database (that’s right – the Power Platform is built on Azure!) so that you can focus more on the parts of your solution that are unique to your organization and less about the base architecture that is consistent between solutions.

The Platform offers several drag-and-drop features, many templates, and hundreds of premade data connectors you can utilize to get apps into minimum viable product (MVP) shape quickly. It also lets you use traditional coding to beef up these apps, allowing highly customizable experiences designed to fit your precise business use case. It democratizes the ability to create apps by having both ends of the spectrum covered, allowing a community to exist globally (and organizationally) between citizen developers who just want to make their jobs a little easier and professional developers whose full-time job is building.

It is this unique feature – the diverse community the Platform is capable of supporting – that really changes the game. The Platform is allowing organizations to rapidly scale their modernization efforts by leveraging resources both in and beyond their IT shops.

The base of most Power Platform solutions is built within the Common Data Service (CDS) – a data model that shapes your model-driven apps, is fed by your canvas apps, and provides a wealth of information to reports published with Power BI. Organizations can deploy multiple solutions to an environment’s CDS, meaning that you can package up a “base solution” – a solution that sets up the common entities that make up the core of your enterprise – and deploy it to all of your Platform environments as a starting point for your developers. Then each app can be packaged up as its own solution and stacked on top of this base, modifying it as needed and even furthering the expediency at which a new solution can be deployed.

The CDS not only contains the data model but also stores the data itself. That means that when you build multiple apps in one environment, you’re creating a single source of truth for all of the applications touching that CDS: pull out processes and data points specific to one business unit while letting them share data with other units in the background. Connect your CDS-built apps to other data sources (an Azure SQL database, Outlook, or even competing software like Salesforce) using the hundreds of existing data connectors, or create custom connectors, to further utilize existing data and reduce duplicative sources (and the source control issues that go along with them).

Common Data Service

Expanding on the need for control of your organization’s data, the Power Platform offers a familiar experience for the system administrators of many organizations by using Azure Active Directory to authenticate users and the O365 Admin Center to manage licenses. Further, the management of the Platform is fortified by the installation and use of the (free!) Center of Excellence Starter Kit. This toolkit provides a starting point to gain organizational insight into the details of how the Platform is being used – locations, environments, flows, makers, connections, apps, and more can be administered with the kit. Drill into each application, tag it, track it, and monitor its usage. Customize the COE Starter Kit and bring a whole new light to what once might have been considered “rogue IT.”

So now that we understand that low code is a good thing, let’s recap what you’re getting with the Power Platform:

  • A way to maximize your resources – the low code features enable citizen developers to migrate their own work, and the pre-built features and “short-hand” coding allow your professional developers to focus on the most important and unique features of your organization’s workloads.
  • A system meant to scale – built on Azure and backed by Microsoft SLAs, this tool is meant for enterprise use and has the speed and reliability to back it. Enable your citizen developers to modernize their own workloads to speed up your cloud overhaul. Develop & leverage a custom, organization-wide base solution to start every new project with a bulk of known entities already installed and scale your implementations even faster.
  • A new level of visibility into your business – the Center of Excellence toolkit allows you to see all of the makers who are building, the apps they’re developing, the connections being made, and more, providing an entirely new window into (and set of controls over) citizen development that might once have been seen as “rogue IT.”

To paraphrase Andrew Welch, Director of Business Applications at AIS and Principal Author of the PPAF, on the subject: some business challenges can be solved with custom software development. Some business challenges can be solved with COTS solutions. Everything else is a missed opportunity to transform and modernize in the cloud – for which low-code cloud transformation with the Power Platform is the answer.

AIS is a Microsoft Partner, the premier developer of customer Power Platform solutions, and the publisher of the Power Platform Adoption Framework.


During a recent design session at a client site, our team had the opportunity to do something cool: create a custom DSL (Domain Specific Language) in YAML for an automation framework we wanted to build. This blog provides a high-level overview. The client wants to use Azure Automation to create a self-healing framework for their infrastructure. The environment is a mixture of both PaaS and IaaS, with certain VMs being used to host app and database servers. Unfortunately, because of potential performance issues, it is important to regularly perform healing actions on the environment, which so far has been done manually. Now you may be asking, “Why don’t they use something like Chef or Puppet?” It is because these technologies were either still being onboarded or not available. You may also ask, “Well, if you’re going to use PowerShell, just use PowerShell DSC!” While I agree that PowerShell DSC is powerful and could potentially power the engine for this application (explained below), DSC itself is not very user-friendly. One of the major goals for this session was to allow any user to edit or read the definitions, and YAML is a much more approachable option.

The design session itself was fascinating. I had never spent time creating my own configuration language, complete with definitions and structure. It was a great learning experience! I had used YAML briefly when demoing tools like Ansible for personal projects, but never directly to solve an issue; YAML had always been just another markup language to me. This usage, however, showed me the power of the language itself. The problem we ended up solving with the snippet below was creating an abstraction layer for Azure Monitor Alerts.

After the session completed, we were left with something like this:
Create an Abstraction Layer

Now what?

We must take this YAML file and convert it into an Azure Monitor Alert. The first problem we ran into: how can we take these configuration values and make them usable in PowerShell? Here is where Cloudbase’s powershell-yaml module comes into play. As we all know, PowerShell was written on top of, and created to be an extension of, the .NET Framework, so the people at Cloudbase created the wonderful powershell-yaml, a module that is a wrapper around the popular .NET library YamlDotNet.

With powershell-yaml you can create something like this:
Converting the yaml File

In this example, I am not only converting the YAML file into a JSON file (as an example), but I am also returning the YAML as a PSObject, which I find much easier to use because of dot notation.
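The screenshots from the session aren’t reproduced here, but a minimal sketch of that conversion might look like the following. The alert definition in the here-string is a hypothetical stand-in for the DSL we designed, not the client’s actual schema:

```powershell
# Requires the powershell-yaml module (a wrapper around YamlDotNet):
#   Install-Module powershell-yaml -Scope CurrentUser
Import-Module powershell-yaml

# Hypothetical alert definition in our custom YAML DSL
$yamlDefinition = @"
alertName: vm-app-01-cpu-high
resourceGroup: rg-app-prod
targetResource: vm-app-01
metric: Percentage CPU
operator: GreaterThan
threshold: 80
windowMinutes: 5
severity: 3
"@

# Convert the YAML into a hashtable of configuration values
$config = ConvertFrom-Yaml $yamlDefinition

# As an example, also write the same configuration out as JSON
$config | ConvertTo-Json -Depth 10 | Set-Content -Path .\alert-definition.json

# Return the configuration as a PSObject so we can use dot notation
$alertConfig = [PSCustomObject]$config
$alertConfig.metric      # Percentage CPU
$alertConfig.threshold   # 80
```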

In this case, you can now reference variables such as this:
Corresponding Powershell cmdlets

I was able to follow up with the corresponding PowerShell cmdlets available in the Az Modules and programmatically create Azure Monitor Alerts using YAML.
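As a rough illustration of that step (not the client’s actual engine), the values pulled from the hypothetical definition above could feed the Az.Monitor cmdlets along these lines:

```powershell
# Requires the Az.Compute and Az.Monitor modules and an authenticated session (Connect-AzAccount)

# Build the metric criteria from the values in the YAML definition
$criteria = New-AzMetricAlertRuleV2Criteria `
    -MetricName $alertConfig.metric `
    -TimeAggregation Average `
    -Operator $alertConfig.operator `
    -Threshold $alertConfig.threshold

# Resolve the target VM named in the definition to get its resource ID
$vm = Get-AzVM -ResourceGroupName $alertConfig.resourceGroup -Name $alertConfig.targetResource

# Create (or update) the metric alert rule; an -ActionGroupId pointing at the
# Azure Automation webhook would wire the alert into the self-healing engine
Add-AzMetricAlertRuleV2 `
    -Name $alertConfig.alertName `
    -ResourceGroupName $alertConfig.resourceGroup `
    -TargetResourceId $vm.Id `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Minutes $alertConfig.windowMinutes) `
    -Frequency (New-TimeSpan -Minutes 5) `
    -Severity $alertConfig.severity
```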

The eventual implementation will look roughly like this:
Azure Alerts

Each Azure Alert will trigger a webhook that is received by the Engine running in Azure Automation. The Engine then runs all the business logic to determine whether the piece of infrastructure in question needs healing. If all the previously defined conditions are true, the automation runbook performs the recovery action.
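To make that flow concrete, here is a bare-bones sketch of how such an engine runbook might receive the webhook; the payload property path assumes the common alert schema, and Invoke-HealingAction is a hypothetical placeholder for the business logic:

```powershell
# Azure Automation runbook entry point: when started from a webhook,
# Azure Automation passes the request details in $WebhookData
param(
    [Parameter(Mandatory = $false)]
    [object] $WebhookData
)

if (-not $WebhookData) {
    throw "This runbook is intended to be started by an Azure Monitor alert webhook."
}

# The alert payload arrives as JSON in the request body
$payload = $WebhookData.RequestBody | ConvertFrom-Json

# Identify the affected resource (property path assumes the common alert schema)
$resourceId = $payload.data.essentials.alertTargetIDs[0]

Write-Output "Alert received for $resourceId; evaluating healing conditions..."

# Business logic goes here: evaluate the previously defined conditions and,
# only if they all hold, perform the recovery action.
# Invoke-HealingAction is a hypothetical helper representing that logic.
# Invoke-HealingAction -ResourceId $resourceId
```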

The exercise itself was eye-opening. It made me much more comfortable with the idea of designing a solution for our specific scenario rather than waiting to find a product or framework that would solve the problem for us. This PoC also showed me the power of YAML and how it can turn something incredibly monotonous, like configuration values, into part of a robust solution.

Azure DevOps provides a suite of tools that your development team needs to plan, create, and ship products. It comes in two flavors:

  • Azure DevOps Services – the SaaS option hosted by Microsoft.
  • Azure DevOps Server – the IaaS option hosted by you.

When comparing the two to decide which option enables your team to deliver the most value in the least amount of time, Azure DevOps Services is the clear winner, but velocity alone is not the only consideration for most government teams. The services you use must also be compliant with standardized government-wide security, authorization, and monitoring requirements.

Azure DevOps Services is hosted by Microsoft in Azure regions. At the time of this writing, you do not yet have the option to host Azure DevOps Services in an Azure Government region, so you must use one of the available Azure Commercial regions. As a public service, it also has not yet achieved compliance with any FedRAMP or DoD CC SRG audit scopes. This may sound like a non-starter, but as the FedRAMP website states, it depends on your context and how you use the product.

Depending on the services being offered, the third-party vendor does not necessarily have to be FedRAMP compliant, but there are security controls you must make sure they adhere to. If there is a connection to the third-party vendor, they should be listed in the System Security Plan in the Interconnection Table.

This is the first of two blog posts that will share solutions to common Azure DevOps Services concerns:

  1. In “Azure DevOps Services for Government: Access Control” (this post), I will cover common access control concerns.
  2. In “Azure DevOps Services for Government: Information Storage”, I will address common concerns with storing information in the commercial data centers that host Azure DevOps Services.

Managing User Access

All Azure DevOps organizations support cloud authentication through either Microsoft accounts (MSA) or Azure Active Directory (AAD) accounts. If you’re using an MSA-backed Azure DevOps organization, users will create their own accounts and self-manage security settings, like multi-factor authentication. It is common for government projects to require more centralized oversight of account management and policies.

The solution is to back your Azure DevOps Services organization with an AAD tenant. An AAD-backed Azure DevOps organization meets the following common authentication and user management requirements:

  • Administrators control the lifecycle of user accounts, not the users themselves. Administrators can centrally create, disable, or delete user accounts.
  • With AAD Conditional Access, administrators can create policies that allow or deny access to Azure DevOps based on conditions such as user IP location.
  • AAD can be configured to federate with an internal identity provider, which could be used to enable CAC authentication.

Controlling Production Deployment Pipelines

The value added by Continuous Delivery (CD) pipelines includes increasing team velocity and reducing the risks associated with introducing a change. To fully realize these benefits, you’ll want to design your pipeline to begin at the source code and end in production so that you can eliminate bottlenecks and ensure a predictable deployment outcome across your test, staging, and production environments.

It is common for teams to limit production access to privileged administrators, so some customers raise valid concerns about the idea of a production deployment pipeline when they realize that an action triggered by a non-privileged developer could initiate a process that ultimately changes production resources (e.g., a deployment pipeline triggered by a code change).

The solution is to properly configure your Azure DevOps pipeline Environments. Each environment in Azure DevOps represents a target environment of a deployment pipeline. Environment management is separate from pipeline configuration, which creates a separation of duties between:

  • Team members who define what a deployment needs to do to deploy an application.
  • Team members who control the flow of changes into environments.

Example

A developer team member has configured a pipeline that deploys a Human Resources application. The pipeline is called “HR Service”. You can see in the YAML code below that the developer intends to run the Deploy.ps1 script against the Production pipeline environment.

Production Environment

If we review the Production environment configuration, we can see that an approval check has been configured. When the deployment reaches the stage that will attempt to run against the production environment, a member of the “Privileged Administrators” AAD group will be notified that the deployment is awaiting their approval.

Approvals and Checks

Only the Privileged Administrators group has been given access to administer the pipeline environment, so the team member awaiting approval cannot bypass the approval step by disabling it in the environment configuration or in the pipeline’s YAML definition.

Security Production

By layering environment configuration with other strategies, you’ll establish the boundaries needed to protect your environment while also empowering your development team to work autonomously and without unnecessary bottlenecks. Other strategies to layer include:

  • Governance enforced with Azure Policies
  • Deployment gates based on environment monitoring
  • Automated quality and security scans built into your pipeline

It is also important for all team members involved with a release to be aware of what is changing so that you are not just automating a release over the wall of confusion. This is an example of how a product alone, such as Azure DevOps, is not enough to fully adopt DevOps. You must also address the process and people.

Deployment Pipeline Access to Private Infrastructure

How will Azure DevOps, a service running in a commercial Azure region, be able to “see” my private infrastructure hosted in Azure Government? That’s usually one of the first questions I hear when discussing production deployment pipelines. Microsoft has a straightforward answer for this scenario: self-hosted pipeline agents. Azure DevOps will have no line of sight into your infrastructure. The high-level process for deploying an agent into your environment looks like this:

  • Deploy a virtual machine or container into your private network.
  • Apply any baseline configuration to the machine, such as those defined in DISA’s Security Technical Implementation Guides (STIGs).
  • Install the pipeline agent software on the machine and register the agent with an agent pool in Azure DevOps.
  • Authorize pipelines to use the agent pool to run deployments.

With this configuration, a pipeline job queued in Azure DevOps will be retrieved by the pipeline agent over port 443, pulled into the private network, and then executed.
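For reference, here is a sketch of what the registration step might look like on a Windows agent machine. The organization URL, personal access token, pool name, and agent name are placeholders, and the agent package itself is downloaded from your organization’s Agent Pools page:

```powershell
# Run from the folder where the self-hosted agent package has been extracted
# on the machine inside your private network.

# Unattended configuration: register this agent with an agent pool using a
# personal access token (PAT) scoped to manage agent pools.
.\config.cmd --unattended `
    --url "https://dev.azure.com/<your-organization>" `
    --auth pat `
    --token "<personal-access-token>" `
    --pool "<private-agent-pool>" `
    --agent $env:COMPUTERNAME `
    --runAsService

# The agent only makes outbound HTTPS (443) calls to Azure DevOps to pick up
# queued jobs; Azure DevOps never initiates a connection into your network.
```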

Azure DevOps Configuration

Conclusion

In this post, I’ve introduced you to several practices you can use to develop applications faster without sacrificing security. Stay tuned for the next post in this series where we will discuss common concerns from Government clients around the storage of data within Azure DevOps.