While checking out one of the automated messengers a coworker created, we had an idea: why not use Azure Logic Apps to streamline routine daily tasks? The logic apps listed here take about 15-20 minutes at most to create and go from easiest to hardest to set up. Each section lists what you’ll need before walking through the steps. Keep in mind that while Logic Apps are available on Azure Gov, you might need to talk with a supervisor before implementing them there.

Completely new to Logic Apps? Click here to create your first one!

SharePoint Item Tracker

What you’ll need:

  • Office 365 account with Teams enabled
  • SharePoint access to the desired list
  • The Completed App looks like:

Completed App


  1. Start with a blank Logic App.
  2. Select the SharePoint Trigger “When an item is created”. This will require you to sign in with your Office 365 credentials.
  3. You’ll see a box like this. Rename the title by clicking the three dots in the top right corner.
    When An Item is Created
  4. Select the Site Address of the SharePoint site with the desired list. The dropdown will be populated with all the available sites on the SharePoint domain. Select the List Name from the dropdown of SharePoint lists. If the list you’re looking for isn’t there, it might be hiding on a different site. Set the Interval and Frequency to “1” and “Day”, respectively.
  5. Add a new action to the Logic App and find the action “Post your own adaptive card as the Flow bot to a channel” (As of this writing, this action is still in preview). After signing in again to your Office 365 account, you’ll see this box. (Again, the dots in the top right corner will allow you to change the name)
    Notify the Channel
  6. Add the Team ID and Channel ID. These should correspond to the team and channel where you want to send the notification.
  7. In the Message Field, you can add the message to send to the channel. You also have the option of adding Dynamic Content, which can add the Name of the item, a link to the Item, or other various properties.
  8. Hit Save in the top left to save the Logic App.
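For reference, what step 5 posts is ordinary Adaptive Card JSON. The sketch below builds a minimal card of that shape in TypeScript; the helper name and the two fields standing in for Dynamic Content tokens are illustrative assumptions, not part of the Logic Apps designer.

```typescript
// Hypothetical helper: builds the kind of Adaptive Card payload you would paste
// into the "Post your own adaptive card as the Flow bot to a channel" action.
interface SharePointItem {
  name: string; // stands in for the "Name" dynamic content token
  link: string; // stands in for the "Link to item" dynamic content token
}

function buildItemCard(item: SharePointItem) {
  return {
    $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
    type: "AdaptiveCard",
    version: "1.2",
    body: [
      { type: "TextBlock", weight: "Bolder", text: "New SharePoint item created" },
      { type: "TextBlock", wrap: true, text: `"${item.name}" was added to the tracked list.` },
    ],
    actions: [{ type: "Action.OpenUrl", title: "Open item", url: item.link }],
  };
}
```

In the Message field, the Dynamic Content picker substitutes the real item values where `name` and `link` appear here.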

Twitter Tracker

What you’ll need:

  • Office 365 account
  • Twitter Account
  • The Completed App looks like:

Twitter Tracker


  1. From the empty Logic App, scroll down to find the “Email yourself about new Tweets about a specific keyword via Office365” template.
  2. Click “Use this template”
  3. You’ll be asked to connect to the following: Twitter, Office 365 Users, and Office 365 Outlook.
  4. Click continue and include the desired keyword.
  5. Save the logic app by clicking the Save button in the upper right corner.

Event Time-boxer

This is a useful logic app for keeping two different calendars in sync with each other.
What you’ll need: 

  • Office 365 account
  • The email address to forward events to

The Completed App looks like:

Event Time-Boxer


  1. From the templates page, select “Empty Logic App”
  2. In the search box type “event is created”, and scroll down to select the one from Outlook.com
  3. You’ll be prompted to connect an Outlook account to the Logic App.
  4. Select the preferred calendar to check for events and set the Interval to 1 day.
    When a New Event is Created
  5. Add a new step. In the search box, type “Condition” and scroll down to select the Control action “Condition”.
    Check if Event is New
  6. Under the condition, choose the Subject from the dynamic options. For the condition dropdown, select Contains, and in the right text field, type in “Timebox”.
    Timebox
  7. Under the “false” condition, add a new step. Look for the “Create Event” action from Outlook.com. You might be told to authenticate again.
  8. Select a Calendar for the event and set the Start time and End time to match those of the trigger event. The Subject should be “Timebox:” followed by the trigger’s subject. Because of our earlier condition, this new event won’t trigger the Logic App again.
  9. Add the required attendees field and include the email address you want to forward events to.
    Create Event V2
  10. Save the logic app.
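The branching in steps 5-8 can be sketched in plain TypeScript to make the loop-prevention logic explicit. The types and function here are illustrative only (the real work happens in the Logic App designer), but the guard is the same: an event whose subject already contains “Timebox” is skipped.

```typescript
// Illustrative model of the Event Time-boxer's Condition step.
interface CalendarEvent {
  subject: string;
  start: string; // ISO 8601, as the Outlook trigger provides
  end: string;
}

// Returns the "Timebox" copy to create, or null when the trigger event is
// itself a copy (the Condition's "true" branch: do nothing, so we never loop).
function buildTimeboxCopy(trigger: CalendarEvent): CalendarEvent | null {
  if (trigger.subject.includes("Timebox")) {
    return null; // already a timebox copy: skip it
  }
  return {
    subject: `Timebox: ${trigger.subject}`, // the prefix trips the condition next time
    start: trigger.start, // same start/end as the original event
    end: trigger.end,
  };
}
```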

Honorable Mention: Traffic Sensor

So, there is one more use for Logic Apps that we didn’t cover. Microsoft has a cool tutorial for creating a Traffic Checker, which checks the traffic in the morning and sends an email based on the result. You can find it here.

There are a ton of different connectors for Logic Apps, so there are many more ideas out there than the ones listed here.

It’s been a transformational year at AIS. We worked on some incredible projects with great clients, partners, and co-workers. We learned a lot! And we enjoyed telling you all about it here on the AIS Blog.

As we close out the year, here are the top 10 most read and shared posts of 2019*:

  1. Federated Authentication with a SAML Identity Provider, by Selvi Kalaiselvi
  2. Newly Released JSON Function for Power Apps, by Yash Agarwal
  3. So, You Want to Rehost On-Premises Applications in the Cloud? Here’s How., by Nasir Mirza
  4. Highly-Available Oracle DB on Azure IaaS, by Ben Brouse
  5. The New Windows Terminal – Install, Interact, and Customize, by Clint Richardson
  6. Cloud Transformation Can Wait… Get Me to the Cloud Now!, by Vishwas Lele
  7. HOW TO: Create an Exact Age Field in Microsoft PowerApps and Dynamics, by Chris Hettinger
  8. SharePoint Framework (SPFx) Innovation Project Part I, by Nisha Patel, Elaine Krause, and Selvi Kalaiselvi
  9. Azure Sentinel: A Tip of the Microsoft Security Iceberg, by Benyamin Famili
  10. What Is API Management?, by Udaiveer Virk

Happy New Year to all our readers and bloggers! Be sure to follow AIS on Twitter, Facebook or LinkedIn so you’ll never miss an insight. Perhaps you’ll even consider joining our team in 2020?

*We feature each of our bloggers once on the top 10 list, but we had a few top posts we would be remiss not to mention, including another blog from Yash Agarwal on How To Use Galleries in Power Apps and two more posts from Vishwas Lele, Oracle Database as a Service on Azure (Almost!) and Traffic Routing in Kubernetes via Istio and Envoy Proxy. Enjoy!

K.C. Jones-Evans, a User Experience Developer, Josh Lathery, a Full Stack Developer, and Sara Darrah, a User Experience Specialist, sat down recently to talk through the design and project development planning process we implemented for part of a project. This exercise was to help improve our overall project development planning and create best practices moving forward. We coined the term “Design Huddle” to describe the process of taking a feature from a high level (often a one-sentence request from our customer) to a working product in our software, improving project planning for software development.

We started the Design Huddle because the contract we were working on already had a software process that did not include User Experience (UX). We knew we needed to include UX but weren’t sure how it would work given the fast-paced (2-week sprint cycles) software process we were contractually obligated to follow. We needed a software design planning solution that allowed us to work efficiently and cohesively. The Design Huddle allowed us to do just that.

What does the design huddle mean to you?

Josh: Previously, the technical lead would have all the design decisions worked out before a ticket was assigned for development. The huddle meant that I had more ownership in the feature upfront. It was nice to understand, via the design process, what the product would be used for and why.

K.C.: The huddle was an opportunity to get our thoughts together on the full product before diving straight into development. In the past, we have developed too quickly and discovered major issues. Developing early created too much ownership of the code, so changes were painful when something needed to be corrected.

Sara: The huddle for me meant the opportunity to meet with the developers early to get on the same page prior to development. That way, when we had the final product, we could discuss details and make changes, but we were coming from the same starting point. I have been on other projects where I’m not brought in until after the development is finished, which immediately strains the relationship between UX and Development due to big changes needed to finished code.

What does the design huddle look like?

  • The Team: UX designer, at least one front-end or UX developer, a full-stack developer, the software tester, a graphic designer (as needed), and Subject Matter Experts (as needed)
  • The Meeting: This took time to work out. As with any group, sometimes there were louder or more passionate individuals who seemed to overshadow the rest. At the end of the day, the group worked better with order and consensus:
    • Agendas were key: The UX lead created the agendas for our meetings. Without an agenda, it was too easy to go down a rabbit hole of code details. Agendas also helped folks who were spread across multiple projects focus on the task at hand faster. We included time to report on action items, review old business and open design items, and set the stage for what we hoped to cover as new design work.
    • Action Items: Create and assign actions to maintain in the task management system (JIRA). This was a good translation for developers and helped everyone understand their responsibility leaving the room. These also really helped with sprint planning and the ability to scope tasking.
    • The facilitator had to be assertive: Yes, we are all professionals, and in an ideal world we could “King Arthur Round Table” these huddles. But the few times we tried that, the meeting quickly went off-track and down a rabbit hole, and many teammates left frustrated, feeling like we hadn’t made any progress. The facilitator was the UX specialist for our meetings, but we think the owner of the feature should facilitate. The facilitator needs to be willing to assert themselves during conversations, keep the meeting on track, and table topics for another time when needed.
    • Include everyone and know the crowd: The facilitator needs to quickly understand the team they are working with to figure out how to include everyone. One way we ensured this happened was to do an around-the-room at the end of each meeting.
    • Visual aid that the whole team can see during the meeting: Choose the tool that works best for the topic at hand: a dry-erase board, a wireframe, JIRA tickets, or a mock-up can help people stay on track and ensure common understanding.
    • Table tough items and know when to end the meeting: Sometimes in the larger meetings we needed to call it quits on a debate to give more time for thinking, research, and discussion. Any member of the team could ask for something to be tabled. A tabled item was assigned to a smaller group of individuals to work through the details between the regular Design Huddle sessions.
    • Choose the team participants wisely: The first few meetings most likely will involve the entire team, but smaller huddles (team of 2 or 3) can often work through details of tabled items more efficiently.

What is the benefit?

  • Everyone felt ownership in the product by the end of the design. Sara’s favorite thing about this learning experience was when a developer she hadn’t worked with before said he loved the process: he had the opportunity to provide input early and often, and by the time development started there weren’t really any questions left on how to implement. For Josh, the process made him feel like an engineer and not just a code monkey. For K.C., the process took away the “gray area” of being handed mock-ups without context, as had happened on other projects.
  • Developers were able to tame the ideals of the UX designer by understanding the system, not just the User Interface. Developers could assist UX by asking questions, helping understand the existing system limitations, and raising concerns that the design solution was too complicated given the time we had.
  • The software tester was able to understand the flow of the new designs, ask questions, and assist in writing acceptance criteria that were specific, measurable, attainable, realistic, and testable (SMART).
  • She provided guidance when we needed to go from concept to reality and ensured we understood the design requirements.
  • It was critical to design 1-2 sprints ahead of expected development as part of Agile software development. This allowed the best design to be used rather than forcing ourselves into something that could be completed in two weeks. Together we would know what our end goal was, then break the concept down into tiers. Each tier produced a viable product within one sprint while always keeping the end product design in mind.

The Design Huddle is a way for teams to collaborate early on a new feature or application. We feel it is a great way to work User Experience into the Agile software process and simplify project development planning. We have taken our lessons learned, applied them to a new project the three of us are getting to tackle together, and expanded the concept of the huddle to different members of the team. If you are struggling to incorporate proper design or feel frustration from teammates on an application task, this may be the software design planning solution for you!

Microsoft HQ in Redmond

After much anticipation, the US Department of Defense (DoD) has awarded the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract for cloud computing services to Microsoft over Amazon. This contract is crucial to the Pentagon’s efforts to modernize core technology and improve networking capabilities, and the decision on which cloud provider was the best fit was not something taken lightly.

Current military operations run on software systems and hardware from the 80s and 90s, and the DoD has been dedicated to moving forward with connecting systems, streamlining operations, and enabling emerging technologies through cloud adoption.

Microsoft has always invested heavily back into its products, the leading reason we went all-in on our partnership, strengthening our capabilities and participating in numerous Microsoft programs since the inception of the partner program in 1994.

In our experience, one of the many differentiators for Microsoft Azure is its global networking capabilities. Azure’s global footprint is so vast that it includes 100K+ miles of fiber and subsea cables, and 130 edge locations connecting over 50 regions worldwide. That’s more regions across the world than AWS and Google combined. As networking is a vital capability for the DoD, they’re investing heavily in connecting their bases and improving networking speeds, data sharing, and operational efficiencies, all without sparing security and compliance.

Pioneering Cloud Adoption in the DoD

We are fortunate enough to have been on the front lines of Azure from the very beginning. AIS has been working with Azure since its pre-release under the code name Red Dog in 2008. We have been a leading partner in helping organizations adopt Azure since it officially came to market in 2010, with the privilege of experience in numerous large, complex projects across highly regulated commercial and federal enterprises ever since.

When Azure Government came along for pre-release in the summer of 2014, AIS was among the few partners invited to participate and led all partners in client consumption. As the first partner to successfully support Azure Gov IL5 DISA Cloud Access Point (CAP) connectivity and ATO for the DoD, we’ve taken our experience and developed a reliable framework to help federal clients connect to the DISA CAP and expedite the Authority to Operate (ATO) process.

We have led important early adoption projects to show the path forward with Azure Government in the DoD, including the US Air Force, US Army EITaaS, Army Futures Command, and the Office of the Under Secretary of Defense for Policy. Our experience has allowed us to show proven success moving DoD customers’ IMPACT Level 2, 4, 5, and (soon) 6 workloads to the cloud quickly and thoroughly with AIS’ DoD Cloud Adoption Framework.

To enable faster cloud adoption and cloud-native development, AIS pioneered DevSecOps practices and built Azure Blueprints to help automate federal regulation compliance and the ATO process. We were also first to achieve Trusted Internet Connections (TIC) and DoD Cyber Security Service Provider (CSSP) accreditations, among other milestones.

AIS continues to spearhead the development of processes, best practices, and standards across cloud adoption, modernization, and data & AI. It’s an exceptionally exciting time to be a Microsoft partner, and we are fortunate enough to be at the tip of the spear alongside the Microsoft product teams and enterprises leading the charge in cloud transformation.

Join Our Growing Team

We will continue to train, mentor, and support passionate cloud-hungry developers and engineers to help us face this massive opportunity and further the mission of the DoD.


Rehosting Considerations

What is Rehosting?

Rehosting is an approach to migrating business applications hosted on-premises in data center environments to the cloud by moving the application “as-is,” with little to no changes to the functionality. A common rehosting scenario is the migration of applications that were initially developed for an on-premises environment to take advantage of cloud computing benefits. These benefits may include increased availability, faster networking speeds, reduced technical debt, and a pay-per-use cost structure.

In our experience, rehosting is well suited for organizations under time-sensitive data center evacuation mandates, facing pressure from leadership to migrate, running COTS software that doesn’t support modernization, or those with business-critical applications on end-of-support technologies. These organizations often opt for a rehost then transform strategy, as reviewed in the following blog, Cloud Transformation Can Wait… Get Me to the Cloud Now!

Below we outline important considerations, benefits, and challenges associated with a rehost strategy, based on our real-world experiences moving both custom and packaged commercial on-premises applications to the cloud. We’ll also discuss steps to take when your migration initiative goes beyond rehosting, requiring the assessment of alternative migration approaches, such as re-platforming and refactoring.

Critical Considerations

When moving on-premises business-critical applications to the cloud, there are critical considerations that span technical, operational, and business domains. Below are three key components not to be overlooked when defining your cloud migration strategy:

  • Establishing a shared vision: Ensuring you have set goals and an executive sponsor.
  • Understanding your why: Why are you migrating to the cloud in the first place?
  • Defining your business impact: What impact do you expect from your migration efforts, and are your goals realistic based on the chosen approach?

Establishing a Shared Vision

Establish a Shared Vision with Stakeholders

The landscape of on-premises systems is often governed by many stakeholders, both business and IT, with competing goals, risk profiles, and expected outcomes from a migration effort. Having a clear vision for your rehost initiative with key roles and responsibilities defined is critical to the timeliness, investment, and overall success of your project. Finding an executive sponsor to unite the various groups, make decisions, and define the business goals and expected outcomes is vital in risk management.

As part of creating this shared vision, the executive sponsor needs to ensure:

  • Goal Alignment: A shared vision among various business and IT stakeholders sets direction and expectations for the project. A shared vision allows all parties, including vendors and internal resources, to understand the goal(s) and the role they’ll play in the project.
  • Sufficient Budgeting and Resource Allocation: Appropriate budget and resources must be allocated for executing tasks related to this partnership before the start of the migration effort, to ensure timely project completion.
  • Proper Documentation of Existing Systems: Critical information about on-premises systems and operations is often either insufficient or missing entirely. System documentation is mandatory to migrate systems and uphold their intended purpose.
  • Product Ownership: On-premises business application suites are often acquired or internally developed. Original vendors may be out of business, leaving products no longer viable. Alternatively, a custom product may no longer be supported or understood due to missing source code. An owner needs to be designated to determine the future of the product.
  • Organizational Change Management: Without user adoption, your cloud migration will fail. Change management efforts enable cloud technology adoption and require proper planning and execution from the start.

The considerations outlined above should be discussed up front, and partnerships among stakeholder groups must be established to accomplish the intended goal(s) of migration under executive sponsor leadership.

Understand Your Why

Understand Why You're Moving

You’ve heard the stories about failed cloud migrations. Those stories often start with misaligned expectations and a rushed execution, which are among the top reasons cloud migrations result in a backslide to on-premises environments. Migrating to the cloud isn’t a silver bullet: not every organization will experience cost savings or even immediate functionality improvements from a rehosting project, but there are many opportunities for cost avoidance and optimization of spend.

As an IT director or manager, it’s critical to ensure executive-level sponsors understand how different migration approaches align with anticipated outcomes. There’s often pressure from leadership to migrate to the cloud, and understandably so, given the countless cloud benefits and the many challenges associated with aging on-premises solutions. However, understanding and communicating what’s realistic for your organization and how different approaches will address various business goals is crucial.

Data Center Evacuations & Unsupported Technology

Organizations migrating based on a mandated data center evacuation or the security and compliance risks associated with unsupported or end-of-support technology often look to a rehost strategy as a first step. This helps accomplish the business goal of reducing technical debt or remediating compliance concerns quickly.

Reaping the Benefits of Cloud-Native Solutions

There are many other reasons organizations look to the cloud, such as staying competitive, increasing time to value, or the ability to innovate. To fully realize the cloud outcomes that motivate these decisions – including greater flexibility, scalability, data security, built-in disaster recovery options, and improved collaboration – additional planning and refactoring of on-premises applications are often required. In these cases, sometimes we see a rehost as the first stage (as leadership wants to see quick results or has made a public commitment to migrate to the cloud), followed by more advanced modernization efforts.

To get to the root of goals and expectations, consider the following questions as you build your roadmap:

  1. What are your business objectives for cloud adoption, and how will they help further the company vision?
  2. Is there a set timeline to complete the cloud migration effort?
  3. What internal and external resources are available to support a cloud migration?
  4. How many applications are in your portfolio, and do you plan to migrate everything, or are you considering a hybrid model approach?
  5. What are the technical requirements and interdependencies of your applications? How will you assess cloud readiness?
  6. What are the necessary governance, security, and compliance considerations?
  7. Who will be responsible for moving workloads to the new cloud platform? Who will perform the migration, and manage the workloads? Will you be doing it by yourself, or will it be a shared initiative?
  8. How do you intend to use automation to reduce manual efforts and streamline provisioning and governance?

As you answer the questions above, you may find that a rehost effort is sufficient. Likewise, you may choose to explore a lead horse application approach as part of your migration strategy to better understand the value derived from various modernization tactics.

Uncovered Benefits of the Cloud

Uncover Additional Cloud Benefits

If your organization is interested in exploring cloud benefits that go beyond what a rehost effort can provide, migration options that are more involved than rehosting may be worth your consideration. Organizations looking to modernize through re-platforming or refactoring may be motivated by cloud benefits such as:

  • Faster time to market, product release cycles, and/or pace of innovation
  • Enriched customer and end-user experiences
  • Improved employee technology, collaboration, and processes
  • Better reliability and networking speeds
  • Reduced cost of labor and/or maintenance
  • Ability to leverage emerging technology
  • Built-in disaster recovery options
  • Flexibility and scalability
  • Data security
  • Cost allocation for budgeting and showback/chargeback
  • Move from Capex to Opex (or realize Capex by buying resource commitments)

If you are facing tight timelines to migrate, a rehost effort can get you one step closer to realizing the above benefits. Through an initial migration, you can look to a proof of concept to gain a further understanding of the business impact various approaches have to offer while incrementally progressing cloud transformation.



Rehosting Challenges and Limitations

While rehosting is a faster, less resource-intensive migration approach and a great first step into the cloud, it comes with challenges and limitations.

The primary limitation when migrating certain on-premises applications to the cloud is the application’s degree of cloud compatibility. Specific applications have internal and external dependencies that limit their ability to take advantage of more advanced cloud benefits once rehosted.

While rehosting allows you to modernize the application environment, resulting in outcomes such as reduced Data Center costs, other cloud benefits aren’t fully realized. Outcomes such as elasticity and the ability to take advantage of cloud-native features are not available with a strictly rehost strategy.

And while rehosting is generally more cost-effective than on-premises hosting, running rehosted applications in the cloud can sometimes cost more than running re-platformed or refactored ones, especially without a FinOps strategy to master the unit economics of the cloud for competitive advantage. To show fast progress, rehosting is often a great transitional stage on the way to a cost-effective cloud solution, especially for organizations on a tight timeline. During this stage, managing cloud costs and realizing cloud value with a FinOps practice is key.

Feeling Stuck?

Not Sure Where to Start?

If you’re stuck in analysis paralysis, work with a consultant that’s been through various migration projects, from start to finish, and understands the common challenges and complexities of different approaches.

Whether you’re considering Azure, Office 365, Power Platform, or another cloud platform, AIS has a range of Adoption Frameworks and Assessments that can help you understand your options. With our help, you can create a shared vision and align business goals to the appropriate migration approaches.


SPFx Modern Web Development

SharePoint has been a widely adopted and popular content management system for many large organizations over the past two decades. From SharePoint Portal Server in 2001 to SharePoint Server 2019 and SharePoint Online, the ability to customize the user experience (UX) has evolved dramatically, keeping pace with the evolution of modern web design. SharePoint Framework (SPFx) is a page and web part model that provides full support for client-side SharePoint development, with support for open-source tooling. SPFx works in SharePoint Online and with on-premises SharePoint 2016 and SharePoint 2019, in both modern and classic pages. SPFx Web Parts and Extensions are the latest powerful tools we can use to deliver great UX!

Advantages of using SharePoint Framework

1. It Can’t Harm the Farm

Earlier SharePoint (SP) customizations executed on the server as compiled, server-side code written in a language such as C#. Historically, we created web parts as full-trust C# assemblies that were installed on the SharePoint servers and had access to disrupt SharePoint for all users. Because such code ran with far greater permissions on the server, it could adversely impact or even crash the entire farm. Microsoft tried to solve this problem in SP 2010 by implementing Sandbox solutions, followed by the App Model, now known as the Add-In model.

SPFx development is based on JavaScript running in a browser, making REST API calls to the SharePoint and Office 365 back-end workloads; it does not touch the internals of SharePoint.

The SharePoint Framework is a safer, lower-risk model for SharePoint development.
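As a sketch of what “client-side calls only” looks like in practice, the helper below builds the standard SharePoint REST endpoint for reading a list’s items. In a real web part you would issue the request with SPHttpClient from @microsoft/sp-http; the plain URL builder keeps the example self-contained, and the site URL and list title shown in the comment are illustrative.

```typescript
// Builds the standard SharePoint REST endpoint for a list's items, e.g.
//   https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Tasks')/items
// In SPFx this URL would be passed to SPHttpClient.get(); the builder itself
// has no SharePoint dependency.
function buildListItemsUrl(webUrl: string, listTitle: string): string {
  // In an OData string literal, a single quote is escaped by doubling it.
  const escaped = listTitle.replace(/'/g, "''");
  return `${webUrl}/_api/web/lists/getbytitle('${encodeURIComponent(escaped)}')/items`;
}
```

Because the call runs as the signed-in user in the browser, it can only read what that user is already permitted to see, which is exactly why this model can’t harm the farm.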

2. Modern Development Tools

By building SPFx elements using JavaScript and its wealth of libraries, the UX and UI can be shaped as beautifully as any modern website. The JavaScript is embedded directly into the page, and the controls are rendered in the normal page DOM.

SharePoint Framework development is JavaScript framework-agnostic. The toolchain is based on common open-source client development tools such as npm, TypeScript, Yeoman, webpack, and gulp, running on Node.js. It supports open-source JavaScript libraries such as React.js, Angular.js, Handlebars, Knockout, and more. These provide a lightweight and rapid user experience.

3. Mobile-First Design

“Mobile first”, as the name suggests, means that we start the product design from the mobile end, which has more restrictions to make the content usable in the small space of a phone. Next, we can expand those features to a more luxurious space to create a tablet or desktop version.

Because SharePoint Framework customizations run in the context of the current page (and not in an IFRAME), they are responsive, lightweight, accessible, and mobile-friendly. Mobile support is built-in from the start. Content reflows across device sizes and pages are fast and fluid.

4. Simplified Deployment

There is some work to do at the beginning of a new project to set up the SPFx structure to support reading from a remote host. An App Catalog must be created, as well as generating and uploading a manifest file. If the hosted content is connected with a CDN (Content Delivery Network), that will also require setup. However, once those structural pieces are in place, deployment is simplified to updating files on the host location. It does not require traditional code deployments of server-side code, with its attendant restrictions and security review lead time.

5. Easier Integration of External Data Sources

With SPFx, integrating data from external sources can be easier, since the web part is client-side web content that can call services hosted outside of SharePoint.

SPFx Constraints and Disadvantages

At the time of this blog, the SharePoint Framework is only available in SharePoint Online, on-premises SharePoint 2016, and SharePoint 2019. SPFx cannot be added to earlier versions of SharePoint, such as SharePoint 2013 and 2010.

SharePoint Framework Extensions cannot be used in on-premises SharePoint 2016 but only in SharePoint 2019 and SharePoint Online.

SPFx, like any other client-side implementation, runs in the context of the logged-in user. Permissions cannot be elevated to impersonate an admin user as in farm solutions, CSOM (client-side object model) contexts, or SharePoint Add-ins and Office 365 web applications. The SharePoint application functionality is limited to the current user’s permission level, and customization is based on that as well. To overcome this constraint, a hybrid solution can be implemented in which SPFx communicates with Application Programming Interfaces (APIs). The APIs would be registered as a SharePoint add-in that uses the app-only context to communicate with SharePoint. For this communication between SPFx and the API to work, the API would need to support CORS (Cross-Origin Resource Sharing), as the calls would be cross-domain, client-side calls.
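To make the cross-domain call concrete, here is a small TypeScript sketch of how the SPFx side might shape a request to such an add-in-backed API. The base URL, path, and token handling are illustrative assumptions, not a specific API; the point is that the request is a cross-origin, client-side call, so the API must answer with the appropriate CORS headers.

```typescript
// Illustrative sketch: the shape of a cross-domain request from an SPFx web
// part to an API registered as a SharePoint add-in. All names are hypothetical.
interface ApiRequest {
  url: string;
  method: string;
  mode: string; // "cors": the browser enforces CORS on cross-origin calls
  headers: Record<string, string>;
}

function buildApiRequest(apiBaseUrl: string, path: string, accessToken: string): ApiRequest {
  return {
    url: `${apiBaseUrl}${path}`,
    method: "GET",
    mode: "cors",
    headers: {
      Accept: "application/json",
      // How the token is acquired is outside this sketch (an assumption).
      Authorization: `Bearer ${accessToken}`,
    },
  };
}
```

In the browser, this object would feed a `fetch(request.url, { method, mode, headers })` call; the API must respond to the preflight with `Access-Control-Allow-Origin` for the SharePoint domain, or the browser will block the call.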

SPFx is also not suited for long-running operations, as it is an entirely client-side implementation. A web request cannot wait indefinitely for a long-running operation to return. For those processes, a hybrid approach works: the long-running operation can be implemented in an Azure WebJob or Function, and SPFx can get updates from it via a webhook.
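To make that webhook/polling flow concrete, here is a small client-side state helper. It is only a sketch: the long-running job itself would live in the Azure Function or WebJob, and the status names below are assumptions, not a real Azure contract.

```typescript
// Status values a long-running Azure Function/WebJob might report back to the
// web part (via a webhook notification or a polled status endpoint). These
// names are illustrative, not part of any Azure API.
type JobStatus = "queued" | "running" | "succeeded" | "failed";

interface JobUpdate {
  status: JobStatus;
  resultUrl?: string; // present once the job has produced output
}

// Decide what the web part should do with an update, instead of blocking a
// web request while the server-side work completes.
function nextAction(update: JobUpdate): "poll-again" | "render-result" | "show-error" {
  if (update.status === "succeeded") return "render-result";
  if (update.status === "failed") return "show-error";
  return "poll-again"; // queued or running: work is still in progress
}
```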

Developers coming from a server-side background will face a learning curve with entirely client-side development, but TypeScript helps ease the transition.

SPFx Comparison to Other Technologies and Models

SharePoint lists come in handy for many organizations when entering data, but customers always ask for the ability to display the data in some reporting format, such as a dashboard. Below we compare the different ways we can accomplish this and why SPFx is a good fit:

  • Classic Web Part Pages: If we do not want to use the SharePoint Framework, SharePoint 2019 still supports the classic web part pages. You can add content editor web parts and deploy any custom JavaScript/jQuery code. However, with this approach, uploading the JS files to an SP library and manually adding pages to a library becomes cumbersome. We may end up writing custom JSOM (JavaScript object model) code to make the deployment easier. Microsoft does not recommend this approach, and there is the possibility that it will no longer be supported in the future. Also, with this approach, if you want to render any custom tables, you need to write custom code or use a third-party library. Using the SharePoint Framework, we can easily use Office UI Fabric React components like DetailsList.
  • Custom App: We can design custom applications to deploy in the cloud, which can read the data from SharePoint. The challenge is that each customer environment is different. It’s not always easy to connect to SharePoint from the cloud in a production environment, especially with CAC (Common Access Card) authenticated sites.
  • Power Apps/Logic Apps: With newer technologies such as Power Apps, Logic Apps, and Flow, we can design custom SharePoint forms and business logic and connect to SharePoint using the SharePoint connector. In a production environment, however, it is not easy to get the connection approved or to connect with on-premises data. Power Apps and Flow also require the purchase of licenses.

Using SPFx, we can quickly design dashboards using Office UI Fabric components. For deployment, we do not need to write any custom utility code; the SharePoint Framework package can provision the lists and libraries as well.
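As a small illustration of why the Fabric components help, a dashboard's DetailsList needs little more than column definitions derived from the list's fields. The helper below is a standalone sketch: the IColumn shape mirrors the Office UI Fabric interface but is declared locally so the snippet is self-contained, and the field names are hypothetical.

```typescript
// Minimal local mirror of the Office UI Fabric React IColumn shape.
interface IColumn {
  key: string;
  name: string;      // header text shown to the user
  fieldName: string; // property to read from each row item
  minWidth: number;
}

// Derive DetailsList columns from SharePoint internal field names,
// splitting PascalCase names into friendly headers ("DueDate" -> "Due Date").
function buildColumns(fieldNames: string[]): IColumn[] {
  return fieldNames.map((fieldName, i) => ({
    key: `col${i}`,
    name: fieldName.replace(/([a-z])([A-Z])/g, "$1 $2"),
    fieldName,
    minWidth: 100,
  }));
}
```

The resulting array would be passed straight to the DetailsList component's columns prop.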

Wrapping Up

We hope this blog provided a helpful overview of SPFx and its capabilities. Look for our next post (Part II) on developing and deploying custom SPFx web parts and extensions, and on connecting to APIs/Azure in SharePoint Online and SharePoint 2019!

Additional Links to get started in SPFx

Blazor is coming! In fact, it’s coming sooner than later. Check out ASP.NET’s blog post from April 18, 2019 announcing the official preview.

What is Blazor?

by Vivek Gunnala 

Blazor is a new interactive .NET web framework, which is part of the open-source .NET platform. Blazor uses C#, HTML, CSS, and Razor components instead of JavaScript. It's built on open web standards without the need for any plugins or code transpilation, and it works in all modern web browsers, which is why it's called ".NET in the browser": the C# code runs directly in the browser using WebAssembly. Both client-side and server-side code are developed in C#, which allows you to reuse code and libraries between both sides, such as validations and models.

Apps built in Blazor can use existing .NET libraries by leveraging .NET Standard, allowing the same code to be used across platforms. Having started as an experimental project, Blazor is evolving rapidly, with over 60,000 contributors.

About WebAssembly

At a high level, WebAssembly is explained on the official site as "a binary instruction format for a stack-based virtual machine. It is designed as a portable target for compilation of high-level languages, enabling deployment on the web for client and server applications."
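The "binary instruction format" part is easy to see from JavaScript or TypeScript: every wasm module starts with the same eight-byte preamble, and the standard WebAssembly API can validate such a buffer without instantiating it. (This is the browser's own API, not anything Blazor-specific.)

```typescript
// Every WebAssembly module begins with the magic bytes "\0asm" plus the
// binary format version (currently 1). These eight bytes alone form a
// valid, empty module.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm"
  0x01, 0x00, 0x00, 0x00, // version 1
]);

// WebAssembly.validate checks well-formedness without running anything.
// (Accessed via globalThis so the snippet type-checks outside a DOM lib.)
const wasmApi: any = (globalThis as any).WebAssembly;
console.log(wasmApi.validate(emptyModule)); // prints: true
```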

Should I Use Blazor For My Next Project?

by Ash Tewari 

Blazor’s development status has been promoted from an “Experimental” project to a committed product. This is great news. Blazor is available now as an official preview. Let’s review the factors you should consider when making decisions about adopting Client-Side Blazor for your next production project.

Mono.wasm (the .NET runtime compiled to WebAssembly, executing your .NET assemblies in the browser) does not interact with the DOM directly. It goes through JS interop, which is expensive. The areas where .NET code will see the most net benefit are the model and business logic, not DOM manipulation. If your application is very chatty with the DOM, you might need to carefully assess whether you are getting the expected performance boost from WebAssembly execution of your .NET assemblies. [https://webassemblycode.com/webassembly-cant-access-dom/]

Currently, only the mono runtime is compiled to WebAssembly; your .NET code is executed as-is. This means your .NET code is essentially going through two interpreters, which has a noticeable performance impact. There is work being done to compile .NET assemblies to wasm. That, along with other related improvements in linking and compiling, is expected to improve performance. The fact that Microsoft has decided to commit to Blazor as a product indicates confidence that these performance improvements are likely to become a reality.
[https://www.mono-project.com/news/2018/01/16/mono-static-webassembly-compilation/, https://github.com/WebAssembly/reference-types/blob/master/proposals/reference-types/Overview.md]

In the client-side hosting model, your code is still running in the browser sandbox, so you don't have any access to the file system or other OS libraries. This limitation applies to JavaScript as well. In fact, WebAssembly is executed by the JavaScript runtime: the same runtime that executes the JavaScript in your web application.

Well, if WebAssembly is executed by the same JavaScript runtime, then where do the performance gains everyone is touting come from? The answer is that the gains come from skipping the parsing and optimizing-compilation steps: WebAssembly is decoded and JIT-compiled rather than parsed and compiled before the JIT step. However, work is still ongoing to make .NET IL interpretation reach the performance levels required to fulfill that promise.

Remember that your Blazor code executes in the UI thread of the browser, which can create a bottleneck if your application is CPU bound. Ironically, the CPU/computationally intensive applications are also one of the most compelling use-cases for Blazor. You may need to look into running Blazor components in the Web Worker. We will cover this in a separate blog post dedicated to this technique.

Server-Side Blazor

by Sean McGettrick 

Server-side Blazor, previously referred to as Razor Components, allows developers the same freedom to create UI components using C# instead of JavaScript that client-side Blazor does. The primary difference is that the code is hosted on the server instead of the browser. Blazor components and application logic written to run client-side can also be used server-side.

Razor Components support all the functionality a front-end developer would expect in a modern library including:

  • Parameterization
  • Event handling
  • 2-way data binding
  • Routing
  • Dependency injection
  • Layouts
  • Templating
  • CSS cascading

Razor Components can be nested and reused, similar to React.

Differences from Client-Side

With server-side Blazor, all components are hosted and served from an ASP.NET Core server instead of being run in the browser via WASM. Communication between client and server is handled via SignalR.

Further differences between client and server-side Blazor will be outlined in the next two sections.


Advantages

Server-side Blazor offers a number of advantages over its client-side counterpart. These include:

  • No WASM dependencies. Older desktop browsers and some current mobile browsers lack support for WASM. Since server-side Blazor only requires the browser to be able to support Javascript it can run on more platforms.
  • Building on the last point, since the components and application logic sit server-side, the application is not restricted to the capabilities of the browser.
  • Developing the application on an entirely server-based platform allows you access to more mature .NET runtime and tooling support.
  • Razor components have access to any .NET Core compatible API.
  • Application load times in the browser are faster due to a smaller footprint. Only the SignalR Javascript code required to run the application is downloaded to the client.


Disadvantages

There are, however, some disadvantages to using server-side Blazor:

  • There is higher application latency due to user interactions requiring a network round-trip between the browser and the server.
  • Since the application is entirely hosted on the server, there is no offline support. If the server goes down, the application will not function which breaks one of the core tenets of building a Progressive Web Application (“Connectivity independent: Service workers allow work offline, or on low-quality networks”).
  • Because the server is responsible for maintaining client state and connections, scaling the application can be difficult, since the server is doing all the work.
  • The application must be hosted on an ASP.NET Core server.

Server-Side Blazor Code Re-Use, Razor Pages to Blazor using an MVVM approach

by Adam Vincent 

What is MVVM?

In a nutshell, MVVM is a design pattern derived from the Model-View-Presenter (MVP) pattern. The Model-View-Controller (MVC) pattern is also derived from MVP, but where MVC is suited to sit on top of a stateless HTTP protocol, MVVM is suited for user interface (UI) platforms with state and two-way data binding. MVVM is commonly implemented in Desktop (WPF / UWP), Web (Silverlight), and Mobile (Xamarin.Forms) applications. Like the other frameworks, Blazor acts much like a Single Page Application (SPA) that has two-way data binding and can benefit from the MVVM pattern. So whether you have existing MVVM code in the form of a WPF or mobile application, or are starting green with new code, you can leverage MVVM to re-use your existing code in Blazor, or share your code with other platforms.

You can find more information on MVVM on Wikipedia.

Example Presentation Layer


At the heart of MVVM is the INotifyPropertyChanged interface, which notifies clients that a property has changed. It is this interface that turns a user interaction into a call into your code. Usually all ViewModels, and some Models, implement INotifyPropertyChanged; therefore, it is common either to use a library (Prism, MVVM Light, Caliburn) or to create your own base class. What follows is a minimal implementation of INotifyPropertyChanged.

public abstract class BindableBase : INotifyPropertyChanged
{
    protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        OnPropertyChanged(propertyName);
        return true;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}

In this simplified model class, which derives from BindableBase, we have a NewCustomerModel with a single property, FirstName. In this context, we would probably have a customer filling out an input within a form on a website where they must fill in their first name. This input would be bound to an instance of NewCustomerModel on the ViewModel. While the customer is filling out the form, since we are in a two-way data binding scenario, each time the customer enters or removes a character in the form's input box, SetField() is called and causes the PropertyChanged event to fire.

public class NewCustomerModel : BindableBase
{
    private string firstName;

    public string FirstName
    {
        get => firstName;
        set => SetField(ref firstName, value);
    }
}

Learn More: If you need to know more about INotifyPropertyChanged the Microsoft Docs cover this topic very well.


With INotifyPropertyChanged out of the way, here is the entire presentation model.

public class NewCustomerModel : BindableBase
{
    [Display(Name = "Customer Number")]
    public string CustomerNumber { get; set; }

    [Display(Name = "Full Name")]
    public string FullName => $"{FirstName} {LastName}";

    private string firstName;
    [Required]
    [Display(Name = "First Name")]
    public string FirstName
    {
        get => firstName;
        set { SetField(ref firstName, value); OnPropertyChanged(nameof(FullName)); }
    }

    private string lastName;
    [Required]
    [Display(Name = "Last Name")]
    public string LastName
    {
        get => lastName;
        set { SetField(ref lastName, value); OnPropertyChanged(nameof(FullName)); }
    }

    [Display(Name = "Address")]
    public string Address => $"{Street}, {City}, {State} {PostalCode}";

    private string street;
    [Required]
    [Display(Name = "Street Address")]
    public string Street
    {
        get => street;
        set { SetField(ref street, value); OnPropertyChanged(nameof(Address)); }
    }

    private string city;
    [Required]
    [Display(Name = "City")]
    public string City
    {
        get => city;
        set { SetField(ref city, value); OnPropertyChanged(nameof(Address)); }
    }

    private string state;
    [Required]
    [Display(Name = "State")]
    public string State
    {
        get => state;
        set { SetField(ref state, value); OnPropertyChanged(nameof(Address)); }
    }

    private string postalCode;
    [Required]
    [Display(Name = "Zip Code")]
    public string PostalCode
    {
        get => postalCode;
        set { SetField(ref postalCode, value); OnPropertyChanged(nameof(Address)); }
    }
}

There are a few things to point out in this presentation model. First, please note the use of Data Annotation attributes such as [Required]. You can decorate your properties to provide rich form-validation feedback to your users. When the customer misses a required field while filling out the form, the form will not pass model validation. This prevents the form from being submitted and provides an error message if one is configured. We will cover this more in the View section.

The next thing to point out: I covered SetField() in the INotifyPropertyChanged section, but there is an additional bit of complexity.

[Display(Name = "Full Name")]
public string FullName => $"{FirstName} {LastName}";

Note that the FullName property is a { get; }-only concatenation of the customer’s first and last name. Since we are forcing the customer to fill out first and last name in a separate form field, changing either the first or last name causes the FullName to change. We want the ViewModel to be informed of any changes to FullName.

private string firstName;
[Required]
[Display(Name = "First Name")]
public string FirstName
{
    get => firstName;
    set { SetField(ref firstName, value); OnPropertyChanged(nameof(FullName)); }
}

After SetField() is invoked in the base class, there is an additional call to OnPropertyChanged(), which lets the ViewModel know that, in addition to FirstName changing, FullName has also changed.

Example ViewModel Interface

The example ViewModel below will expand on the model above. We’ll be using a simplified user story of “Creating a New Customer.”

Blazor supports .NET Core’s dependency injection out of the box, which makes injecting a ViewModel very simple. In the following ViewModel interface, we’ll need our concrete class to have an instance of NewCustomer as well as a method which knows how to create a new customer.

public interface ICustomerCreateViewModel
{
    NewCustomerModel NewCustomer { get; set; }
    void Create();
}

And the concrete implementation of ICustomerCreateViewModel:

public class CustomerCreateViewModel : ICustomerCreateViewModel
{
    private readonly ICustomerService _customerService;

    public CustomerCreateViewModel(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    public NewCustomerModel NewCustomer { get; set; } = new NewCustomerModel();

    public void Create()
    {
        // map the presentation model to the data layer entity
        var customer = new NewCustomer()
        {
            CustomerNumber = Guid.NewGuid().ToString().Split('-')[0],
            FullName = $"{NewCustomer.FirstName} {NewCustomer.LastName}",
            Address = $"{NewCustomer.Street}, {NewCustomer.City}, {NewCustomer.State} {NewCustomer.PostalCode}"
        };
        _customerService.Create(customer);
    }
}

ViewModel Deep-Dive

In the constructor, we’re getting an instance of our ICustomerService which knows how to create new customers when provided the data layer entity called NewCustomer.

I need to point out that NewCustomer and NewCustomerModel serve two different purposes. NewCustomer, a simple class object, is the data entity used to persist the item. NewCustomerModel is the presentation model. In this example, we save the customer’s full name as a single column in a database (and is a single property in NewCustomer), but on the form backed by the NewCustomerModel presentation model, we want the customer to fill out multiple properties, ‘First Name’ and ‘Last Name’.

In the ViewModel, the Create() method shows how a NewCustomerModel is mapped to a NewCustomer. There are some tools that are very good at doing this type of mapping (like AutoMapper), but for this example the amount of code to map between the types is trivial. For reference, what follows is the data entity.

public class NewCustomer
{
    public string CustomerNumber { get; set; }
    public string FullName { get; set; }
    public string Address { get; set; }
}

Opinionated Note: Presentation models and data entities should be separated into their respective layers. It is possible to create a single CustomerModel and use it for both presentation and data layers to reduce code duplication, but I highly discourage this practice.


Example View

The last and final piece of the MVVM pattern is the View. The View in the context of Blazor is either a page or a component: a .razor file (or a .cshtml file) containing Razor code, which is a mix of C# and HTML markup. In the context of this article, our view is a customer form that can be filled out, plus a button that calls the ViewModel's Create() method when the form has been filled out properly according to the validation rules.

@page "/customer/create"
@using HappyStorage.Common.Ui.Customers
@using HappyStorage.BlazorWeb.Components
@inject Microsoft.AspNetCore.Components.IUriHelper UriHelper
@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel

<h1>Create Customer</h1>

<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />
    <div class="form-group">
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.LastName)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />
    </div>
    <div class="form-group">
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.Street)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.Street" />
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.City)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.City" />
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.State)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.State" />
        <LabelComponent labelFor="@(() => viewModel.NewCustomer.PostalCode)" />
        <InputText class="form-control" bind-Value="@viewModel.NewCustomer.PostalCode" />
    </div>
    <br />
    <button class="btn btn-primary" type="submit">Submit</button>
    <button class="btn" type="button" onclick="@ReturnToList">Cancel</button>
</EditForm>

The first thing to note is at the top of the code. This is how we use dependency injection to get an instance of our ViewModel.

@inject HappyStorage.Common.Ui.Customers.ICustomerCreateViewModel viewModel

Easy! Next, we need to create the form. The EditForm needs an instance of a model to bind to (our ViewModel's NewCustomer) and a method to call when the user submits a valid form.

<EditForm Model="@viewModel.NewCustomer" OnValidSubmit="@HandleValidSubmit">

Next, we bind each property to its respective input field. Blazor has some built-in input components, such as InputText, which help you accomplish the binding. They are still under development, and you may find some features lacking at the time of writing. Please refer to the docs in the note below for more up-to-date info.

Note: The LabelComponent is something I've created as a replacement for the asp-for tag-helper; it retrieves the DisplayAttribute from the presentation model classes. That code is available in the GitHub repository listed at the top.

<LabelComponent labelFor="@(() => viewModel.NewCustomer.FirstName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.FirstName" />
<LabelComponent labelFor="@(() => viewModel.NewCustomer.LastName)" />
<InputText class="form-control" bind-Value="@viewModel.NewCustomer.LastName" />

The magic here is bind-Value, which binds our InputText text box to the value of the ViewModel's instance of the NewCustomerModel presentation model.

Note: You can view full documentation on Blazor Forms and Validation here.

Last but not least, we'll need some code to call our ViewModel's Create() method when the form is submitted and valid, along with the ReturnToList handler I've wired to the Cancel button's onclick.

@functions {
    private void HandleValidSubmit() => viewModel.Create();

    // The route below is illustrative; navigate back to your customer list page.
    private void ReturnToList() => UriHelper.NavigateTo("/customers");
}


That’s it! In summary, I’ve covered what MVVM is, how Blazor can benefit from it, as well as an in-depth look at a simple example of how we can create a form with validation and rich feedback to the user. It is also important to reiterate that this example works not only in Blazor but can also be used in Windows Presentation Foundation (WPF) desktop applications as well as on other platforms. Please check out the GitHub repository as I continue to develop and expand on this concept.

Developer Gotchas

by Morgan Baker 

Working with a new framework like Blazor always has its learning experiences. The goal of this section is to help alleviate headaches by providing common problems and solutions we encountered with Blazor.

  • My localization isn’t working!
For this problem, check your route parameters. Depending on the parameter type, the route uses the invariant culture by default, which prevents localized URLs. This can be solved by allowing the parameter to be passed in as any type and then validating the type in C# code before using it.
  • I can’t debug my C# code!
    Server-side debugging for Blazor doesn’t exist yet, but you’ll still be able to debug the whole application (assuming your server-side is using ASP.NET Core).
  • I can’t see my C# in the browser!
C# code in Blazor is delivered to the browser and executed via WebAssembly, so the C# source can't be displayed in the browser directly. However, you can still inspect the code in Chrome through remote debugging. Follow these steps.
  • Why isn’t my new route working?
    Most of the time you’ll need to rebuild the application to get new routes on development applications. Other causes might be naming problems or a problem with the route parameter types.
  • Everything seems to be loading slow
This can be caused by multiple issues, some of which are not Blazor-specific. For the Blazor-specific issues, it varies between server and client. Any page using server-side Blazor must make a round trip to the server for each interaction, which deals a hit to performance. Any site using client-side Blazor will have a long initial load time, then be more responsive afterward.
  • I’m seeing a blank page and I set everything up correctly!
    This is a specific one that I ran into when first using the templates in Visual Studio 2019. The solution was making sure I had the right .NET Core SDK installed. You can have the wrong version and still create a Blazor website with no errors, at least until the app starts running. You can install the latest version of the .NET Core SDK here.

Online Resources

by JP Roberts III 

As of the writing of this blog post, Blazor is still a new framework, and as such, is still changing rapidly. Pluralsight doesn’t have any courses covering Blazor, Udemy only has a couple of short videos, and Microsoft’s Learning site has no specific courses dedicated to Blazor.

However, there are several websites that have a good deal of information and samples for developers:

YouTube also has several informative videos on Blazor, such as a two-part series on the Microsoft Visual Studio channel: Blazor – Part 1 and Blazor – Part 2.

In this episode of the Azure Government video series, Steve Michelotti sits down with AIS’ very own Vishwas Lele to discuss migrating and modernizing with Kubernetes on Azure Government. You’ll learn about the traditional approaches for migrating workloads to the cloud, including:

1. Rehost
2. Refactor
3. Reimagine

You will also learn how Kubernetes provides an opportunity to fundamentally rethink these traditional approaches to cloud migration by leveraging Kubernetes in order to get the “best of all worlds” in the migration journey. If you’re looking to migrate your existing legacy workloads to the cloud, while minimizing code changes and taking advantage of innovative cloud-native technologies, this is the video you should watch!


Meet some of the AIS Recruiting Team – They’re going to talk you through some of their top recommended job interview tips.


My name is Francesca Hawk. My name is Rana Shahbazi. My name is Kathleen McGurk. My name is Jenny Wan. My name is Denise Kim.

Tip #1: Be Open, Transparent & Direct

I think it’s important for candidates to be authentic and transparent throughout the entire interview process.

Keeping the line of communication open through the interview process is really important for both sides. If you have other opportunities on the table, say that. The recruiters are your advocates and, in essence, kind of your best friend. Being direct means giving us enough feedback: if you are not interested, if the commute is an issue, if you want more money, or if your clearance was an issue, just let us know.

Tip #2: Know What You Want

So before even searching for opportunities, you have to figure out what you're looking for in a company. And once you figure that out, whether it's the culture of the company or the location of the company, definitely ask questions of the recruiter prior to the interview, so while you're at the interview you already have a little bit of that information.

Tip #3: Be On Time & Be Prepared

You always want to make sure you're on time. Generally, you want to arrive about 15 minutes before your interview. Know where you're going to park and make sure you look up directions ahead of time. And just be prepared in general.

Preparation is extremely underrated in the interview process, so really do your research and get familiar with the company and its culture. Go online. Check out the general website and the job description. Make sure you're aware of the skills and qualifications they're really looking for. Glassdoor always provides really good reviews from current employees. The company website, and certainly LinkedIn, are a huge aspect, and social media in general.

Tip #4: Ask Questions

Ask questions, or have questions ready to ask us. Ask about the process, the expectations, who you'll potentially be meeting with, and what the duration could be. The company can't provide information unless you ask for it.

You also have to make sure that you are interviewing the company just as much as they're interviewing you. Ask the interviewers about the culture; you're going to get a different response from everybody, but if they all seem to check out or are the same, then that means the culture is pretty good.

Just make sure that you feel comfortable with the environments that, you know, you’re going to be working in.

Tip #5: Make Sure You Understand the Role

Really use the opportunity to understand the position and then to sell your strengths and also kind of tie it back into your accomplishments.

Make sure that you talk about what you were individually able to accomplish in a project: what you personally were able to bring to the table, and not necessarily what the team accomplished as a whole.

Tip #6: Show Your Interest

I think your presentation and the way you present yourself to the interviewers, and to anybody you interact with in the interview process, is extremely important.

So not just what you say, but how you say it. Eye contact and body language say a lot about your interest in the position and the company as a whole.

Showing your interest makes a recruiter feel that you're confident, that you can certainly do the role, and that you are excited about this opportunity.

I think you should be excited about interviewing with a company that you're interested in. That sounds silly, but going in excited matters, and that's why body language and eye contact are such important aspects.

Tip #7: Listen

People are so busy thinking about what they’re going to say next that they don’t actually pay attention to the questions being asked.

So making sure that you’re hearing what they’re saying and then taking the time to respond is really important.

Tip #8: Follow Up

Certainly, asking for next steps is very helpful, and it is another way of expressing your interest. Definitely be responsive; the general rule of thumb is a turnaround time within 12 hours. And if you're not interested, or AIS isn't where this opportunity ranks number one for you, that's okay; we like to know that as well.

You definitely want to send a thank you note – it goes a long way and it shows you’re very interested in the company and it always leaves a great impression.

We’re Hiring!

AIS is always looking to connect with talented technologists that are passionate about learning and growing to staff exciting new projects for our commercial and federal clients. If you’re interested in working at AIS, check out our current career openings.

We’re proud to announce that AIS has successfully renewed all six of our Microsoft Gold Partner competencies for 2019. AIS has been consistently recognized as a Microsoft Gold Partner for many years now, and we’re currently distinguished at the Gold level for:

    • DevOps
    • Cloud Platform
    • Cloud Productivity
    • Application Development
    • Application Integration
    • Collaboration and Content

Microsoft Gold Partner Logo

The Microsoft Partner Program: Defining the Levels of Excellence

Each of these achievements is an important benchmark in the competitive world of Microsoft technology partners. Every year, Microsoft evaluates our staff, our project history, and our customer references. A single Gold competency requires employees to hold multiple Microsoft Certified Professional (MCP) certifications and pass numerous developer exams, and requires five in-depth customer references, among other objectives.

We’re proud that over 70% of our staff maintains relevant certifications, validating our knowledge and expertise and allowing us to reach the Gold level across so many areas of our business. Congrats to the entire AIS team for once again bringing home the Gold!

Interested in learning more about our involvement as a certified Microsoft Gold Partner? Click here to get in touch with a solutions executive or give us a call today at 703-860-7800.